Test Report: KVM_Linux_crio 20062

964562641276d457941dbb6d7cf4aa7e43312d02:2024-12-10:37415

Failed tests (32/314)

| Order | Failed Test                                                             | Duration (s) |
|-------|-------------------------------------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                                             | 151.52       |
| 38    | TestAddons/parallel/MetricsServer                                       | 364.54       |
| 47    | TestAddons/StoppedEnableDisable                                         | 154.35       |
| 166   | TestMultiControlPlane/serial/StopSecondaryNode                          | 141.4        |
| 167   | TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop          | 5.49         |
| 168   | TestMultiControlPlane/serial/RestartSecondaryNode                       | 6.29         |
| 169   | TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart             | 6.43         |
| 170   | TestMultiControlPlane/serial/RestartClusterKeepsNodes                   | 385.72       |
| 173   | TestMultiControlPlane/serial/StopCluster                                | 142.03       |
| 233   | TestMultiNode/serial/RestartKeepsNodes                                  | 323.4        |
| 235   | TestMultiNode/serial/StopMultiNode                                      | 144.92       |
| 242   | TestPreload                                                             | 161.96       |
| 250   | TestKubernetesUpgrade                                                   | 391.14       |
| 285   | TestPause/serial/SecondStartNoReconfiguration                           | 53.94        |
| 287   | TestStartStop/group/old-k8s-version/serial/FirstStart                   | 270.37       |
| 297   | TestStartStop/group/no-preload/serial/Stop                              | 139.09       |
| 299   | TestStartStop/group/embed-certs/serial/Stop                             | 138.97       |
| 302   | TestStartStop/group/default-k8s-diff-port/serial/Stop                   | 139.02       |
| 303   | TestStartStop/group/old-k8s-version/serial/DeployApp                    | 0.47         |
| 304   | TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive       | 97.42        |
| 305   | TestStartStop/group/no-preload/serial/EnableAddonAfterStop              | 12.38        |
| 306   | TestStartStop/group/embed-certs/serial/EnableAddonAfterStop             | 12.38        |
| 309   | TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop   | 12.38        |
| 312   | TestStartStop/group/old-k8s-version/serial/SecondStart                  | 726.26       |
| 314   | TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop           | 543.97       |
| 315   | TestStartStop/group/no-preload/serial/UserAppExistsAfterStop            | 544.26       |
| 316   | TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop | 544.49       |
| 317   | TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop       | 543.17       |
| 318   | TestStartStop/group/embed-certs/serial/AddonExistsAfterStop             | 378.9        |
| 319   | TestStartStop/group/no-preload/serial/AddonExistsAfterStop              | 323.64       |
| 320   | TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop   | 455.68       |
| 321   | TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop         | 160.37       |
TestAddons/parallel/Ingress (151.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-327804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-327804 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-327804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [52fd3c65-4d51-4779-8a7a-3c2bcae19f57] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [52fd3c65-4d51-4779-8a7a-3c2bcae19f57] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.00494638s
I1209 23:47:13.184849   86296 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-327804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.865502088s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-327804 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.22
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-327804 -n addons-327804
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 logs -n 25: (1.147338719s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-539681                                                                     | download-only-539681 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-279229                                                                     | download-only-279229 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-539681                                                                     | download-only-539681 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-419481 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | binary-mirror-419481                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41707                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-419481                                                                     | binary-mirror-419481 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| addons  | disable dashboard -p                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-327804                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-327804                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-327804 --wait=true                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:45 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:45 UTC | 09 Dec 24 23:45 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:45 UTC | 09 Dec 24 23:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | -p addons-327804                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-327804 ip                                                                            | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-327804 ssh cat                                                                       | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /opt/local-path-provisioner/pvc-d933e89a-c1b5-434b-bf3c-35e985eb04c2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-327804 ssh curl -s                                                                   | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-327804 ip                                                                            | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:40
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:40.797815   86928 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:40.797941   86928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:40.797951   86928 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:40.797955   86928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:40.798164   86928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1209 23:43:40.798829   86928 out.go:352] Setting JSON to false
	I1209 23:43:40.799678   86928 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5172,"bootTime":1733782649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:40.799766   86928 start.go:139] virtualization: kvm guest
	I1209 23:43:40.801628   86928 out.go:177] * [addons-327804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:40.803152   86928 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:43:40.803155   86928 notify.go:220] Checking for updates...
	I1209 23:43:40.804421   86928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:40.805674   86928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:43:40.806748   86928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:40.807838   86928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:43:40.808861   86928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:43:40.810037   86928 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:40.840708   86928 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:43:40.841813   86928 start.go:297] selected driver: kvm2
	I1209 23:43:40.841833   86928 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:43:40.841851   86928 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:43:40.842524   86928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:40.842643   86928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:43:40.856864   86928 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:43:40.856908   86928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:40.857223   86928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:43:40.857269   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:43:40.857327   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:43:40.857340   86928 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:40.857398   86928 start.go:340] cluster config:
	{Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:40.857549   86928 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:40.859092   86928 out.go:177] * Starting "addons-327804" primary control-plane node in "addons-327804" cluster
	I1209 23:43:40.860222   86928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:40.860249   86928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:40.860268   86928 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:40.860354   86928 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:43:40.860368   86928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1209 23:43:40.860769   86928 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json ...
	I1209 23:43:40.860796   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json: {Name:mk75ac48819931541f6e8d216a32d3d7747b635e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:40.860941   86928 start.go:360] acquireMachinesLock for addons-327804: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:43:40.861012   86928 start.go:364] duration metric: took 55.128µs to acquireMachinesLock for "addons-327804"
	I1209 23:43:40.861038   86928 start.go:93] Provisioning new machine with config: &{Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:43:40.861090   86928 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 23:43:40.862489   86928 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 23:43:40.862647   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:43:40.862687   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:43:40.875854   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1209 23:43:40.876367   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:43:40.877017   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:43:40.877043   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:43:40.877383   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:43:40.877557   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:43:40.877674   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:43:40.877787   86928 start.go:159] libmachine.API.Create for "addons-327804" (driver="kvm2")
	I1209 23:43:40.877822   86928 client.go:168] LocalClient.Create starting
	I1209 23:43:40.877859   86928 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1209 23:43:40.954333   86928 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1209 23:43:41.072464   86928 main.go:141] libmachine: Running pre-create checks...
	I1209 23:43:41.072488   86928 main.go:141] libmachine: (addons-327804) Calling .PreCreateCheck
	I1209 23:43:41.072961   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:43:41.073400   86928 main.go:141] libmachine: Creating machine...
	I1209 23:43:41.073412   86928 main.go:141] libmachine: (addons-327804) Calling .Create
	I1209 23:43:41.073541   86928 main.go:141] libmachine: (addons-327804) Creating KVM machine...
	I1209 23:43:41.074849   86928 main.go:141] libmachine: (addons-327804) DBG | found existing default KVM network
	I1209 23:43:41.075569   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.075394   86950 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1209 23:43:41.075593   86928 main.go:141] libmachine: (addons-327804) DBG | created network xml: 
	I1209 23:43:41.075603   86928 main.go:141] libmachine: (addons-327804) DBG | <network>
	I1209 23:43:41.075609   86928 main.go:141] libmachine: (addons-327804) DBG |   <name>mk-addons-327804</name>
	I1209 23:43:41.075615   86928 main.go:141] libmachine: (addons-327804) DBG |   <dns enable='no'/>
	I1209 23:43:41.075619   86928 main.go:141] libmachine: (addons-327804) DBG |   
	I1209 23:43:41.075625   86928 main.go:141] libmachine: (addons-327804) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 23:43:41.075635   86928 main.go:141] libmachine: (addons-327804) DBG |     <dhcp>
	I1209 23:43:41.075641   86928 main.go:141] libmachine: (addons-327804) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 23:43:41.075646   86928 main.go:141] libmachine: (addons-327804) DBG |     </dhcp>
	I1209 23:43:41.075651   86928 main.go:141] libmachine: (addons-327804) DBG |   </ip>
	I1209 23:43:41.075658   86928 main.go:141] libmachine: (addons-327804) DBG |   
	I1209 23:43:41.075663   86928 main.go:141] libmachine: (addons-327804) DBG | </network>
	I1209 23:43:41.075669   86928 main.go:141] libmachine: (addons-327804) DBG | 
	I1209 23:43:41.080831   86928 main.go:141] libmachine: (addons-327804) DBG | trying to create private KVM network mk-addons-327804 192.168.39.0/24...
	I1209 23:43:41.144777   86928 main.go:141] libmachine: (addons-327804) DBG | private KVM network mk-addons-327804 192.168.39.0/24 created
	I1209 23:43:41.144832   86928 main.go:141] libmachine: (addons-327804) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 ...
	I1209 23:43:41.144856   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.144754   86950 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:41.144875   86928 main.go:141] libmachine: (addons-327804) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:43:41.144981   86928 main.go:141] libmachine: (addons-327804) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 23:43:41.414966   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.414844   86950 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa...
	I1209 23:43:41.750891   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.750756   86950 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/addons-327804.rawdisk...
	I1209 23:43:41.750921   86928 main.go:141] libmachine: (addons-327804) DBG | Writing magic tar header
	I1209 23:43:41.750929   86928 main.go:141] libmachine: (addons-327804) DBG | Writing SSH key tar header
	I1209 23:43:41.751004   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.750937   86950 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 ...
	I1209 23:43:41.751065   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804
	I1209 23:43:41.751091   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 (perms=drwx------)
	I1209 23:43:41.751112   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1209 23:43:41.751124   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1209 23:43:41.751137   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1209 23:43:41.751143   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1209 23:43:41.751170   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:41.751181   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 23:43:41.751191   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1209 23:43:41.751203   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 23:43:41.751222   86928 main.go:141] libmachine: (addons-327804) Creating domain...
	I1209 23:43:41.751234   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 23:43:41.751244   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins
	I1209 23:43:41.751256   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home
	I1209 23:43:41.751275   86928 main.go:141] libmachine: (addons-327804) DBG | Skipping /home - not owner
	I1209 23:43:41.752397   86928 main.go:141] libmachine: (addons-327804) define libvirt domain using xml: 
	I1209 23:43:41.752427   86928 main.go:141] libmachine: (addons-327804) <domain type='kvm'>
	I1209 23:43:41.752435   86928 main.go:141] libmachine: (addons-327804)   <name>addons-327804</name>
	I1209 23:43:41.752440   86928 main.go:141] libmachine: (addons-327804)   <memory unit='MiB'>4000</memory>
	I1209 23:43:41.752445   86928 main.go:141] libmachine: (addons-327804)   <vcpu>2</vcpu>
	I1209 23:43:41.752451   86928 main.go:141] libmachine: (addons-327804)   <features>
	I1209 23:43:41.752458   86928 main.go:141] libmachine: (addons-327804)     <acpi/>
	I1209 23:43:41.752468   86928 main.go:141] libmachine: (addons-327804)     <apic/>
	I1209 23:43:41.752476   86928 main.go:141] libmachine: (addons-327804)     <pae/>
	I1209 23:43:41.752482   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.752493   86928 main.go:141] libmachine: (addons-327804)   </features>
	I1209 23:43:41.752503   86928 main.go:141] libmachine: (addons-327804)   <cpu mode='host-passthrough'>
	I1209 23:43:41.752533   86928 main.go:141] libmachine: (addons-327804)   
	I1209 23:43:41.752569   86928 main.go:141] libmachine: (addons-327804)   </cpu>
	I1209 23:43:41.752582   86928 main.go:141] libmachine: (addons-327804)   <os>
	I1209 23:43:41.752592   86928 main.go:141] libmachine: (addons-327804)     <type>hvm</type>
	I1209 23:43:41.752601   86928 main.go:141] libmachine: (addons-327804)     <boot dev='cdrom'/>
	I1209 23:43:41.752610   86928 main.go:141] libmachine: (addons-327804)     <boot dev='hd'/>
	I1209 23:43:41.752638   86928 main.go:141] libmachine: (addons-327804)     <bootmenu enable='no'/>
	I1209 23:43:41.752655   86928 main.go:141] libmachine: (addons-327804)   </os>
	I1209 23:43:41.752669   86928 main.go:141] libmachine: (addons-327804)   <devices>
	I1209 23:43:41.752684   86928 main.go:141] libmachine: (addons-327804)     <disk type='file' device='cdrom'>
	I1209 23:43:41.752699   86928 main.go:141] libmachine: (addons-327804)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/boot2docker.iso'/>
	I1209 23:43:41.752709   86928 main.go:141] libmachine: (addons-327804)       <target dev='hdc' bus='scsi'/>
	I1209 23:43:41.752724   86928 main.go:141] libmachine: (addons-327804)       <readonly/>
	I1209 23:43:41.752735   86928 main.go:141] libmachine: (addons-327804)     </disk>
	I1209 23:43:41.752748   86928 main.go:141] libmachine: (addons-327804)     <disk type='file' device='disk'>
	I1209 23:43:41.752764   86928 main.go:141] libmachine: (addons-327804)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 23:43:41.752784   86928 main.go:141] libmachine: (addons-327804)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/addons-327804.rawdisk'/>
	I1209 23:43:41.752794   86928 main.go:141] libmachine: (addons-327804)       <target dev='hda' bus='virtio'/>
	I1209 23:43:41.752800   86928 main.go:141] libmachine: (addons-327804)     </disk>
	I1209 23:43:41.752809   86928 main.go:141] libmachine: (addons-327804)     <interface type='network'>
	I1209 23:43:41.752819   86928 main.go:141] libmachine: (addons-327804)       <source network='mk-addons-327804'/>
	I1209 23:43:41.752833   86928 main.go:141] libmachine: (addons-327804)       <model type='virtio'/>
	I1209 23:43:41.752844   86928 main.go:141] libmachine: (addons-327804)     </interface>
	I1209 23:43:41.752855   86928 main.go:141] libmachine: (addons-327804)     <interface type='network'>
	I1209 23:43:41.752868   86928 main.go:141] libmachine: (addons-327804)       <source network='default'/>
	I1209 23:43:41.752875   86928 main.go:141] libmachine: (addons-327804)       <model type='virtio'/>
	I1209 23:43:41.752892   86928 main.go:141] libmachine: (addons-327804)     </interface>
	I1209 23:43:41.752907   86928 main.go:141] libmachine: (addons-327804)     <serial type='pty'>
	I1209 23:43:41.752919   86928 main.go:141] libmachine: (addons-327804)       <target port='0'/>
	I1209 23:43:41.752928   86928 main.go:141] libmachine: (addons-327804)     </serial>
	I1209 23:43:41.752936   86928 main.go:141] libmachine: (addons-327804)     <console type='pty'>
	I1209 23:43:41.752949   86928 main.go:141] libmachine: (addons-327804)       <target type='serial' port='0'/>
	I1209 23:43:41.752960   86928 main.go:141] libmachine: (addons-327804)     </console>
	I1209 23:43:41.752972   86928 main.go:141] libmachine: (addons-327804)     <rng model='virtio'>
	I1209 23:43:41.752979   86928 main.go:141] libmachine: (addons-327804)       <backend model='random'>/dev/random</backend>
	I1209 23:43:41.752987   86928 main.go:141] libmachine: (addons-327804)     </rng>
	I1209 23:43:41.752995   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.753005   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.753013   86928 main.go:141] libmachine: (addons-327804)   </devices>
	I1209 23:43:41.753022   86928 main.go:141] libmachine: (addons-327804) </domain>
	I1209 23:43:41.753031   86928 main.go:141] libmachine: (addons-327804) 
	I1209 23:43:41.756834   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:99:f1:eb in network default
	I1209 23:43:41.757484   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:41.757499   86928 main.go:141] libmachine: (addons-327804) Ensuring networks are active...
	I1209 23:43:41.758125   86928 main.go:141] libmachine: (addons-327804) Ensuring network default is active
	I1209 23:43:41.758480   86928 main.go:141] libmachine: (addons-327804) Ensuring network mk-addons-327804 is active
	I1209 23:43:41.759017   86928 main.go:141] libmachine: (addons-327804) Getting domain xml...
	I1209 23:43:41.759722   86928 main.go:141] libmachine: (addons-327804) Creating domain...
	I1209 23:43:42.926326   86928 main.go:141] libmachine: (addons-327804) Waiting to get IP...
	I1209 23:43:42.927176   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:42.927507   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:42.927535   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:42.927485   86950 retry.go:31] will retry after 270.923204ms: waiting for machine to come up
	I1209 23:43:43.200163   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.200573   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.200598   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.200560   86950 retry.go:31] will retry after 363.249732ms: waiting for machine to come up
	I1209 23:43:43.565030   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.565407   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.565432   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.565376   86950 retry.go:31] will retry after 406.688542ms: waiting for machine to come up
	I1209 23:43:43.973817   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.974220   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.974250   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.974166   86950 retry.go:31] will retry after 504.435555ms: waiting for machine to come up
	I1209 23:43:44.479835   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:44.480175   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:44.480204   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:44.480127   86950 retry.go:31] will retry after 630.106447ms: waiting for machine to come up
	I1209 23:43:45.111920   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:45.112378   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:45.112403   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:45.112329   86950 retry.go:31] will retry after 841.474009ms: waiting for machine to come up
	I1209 23:43:45.954929   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:45.955348   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:45.955377   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:45.955312   86950 retry.go:31] will retry after 945.238556ms: waiting for machine to come up
	I1209 23:43:46.902593   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:46.902917   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:46.902946   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:46.902874   86950 retry.go:31] will retry after 1.369231385s: waiting for machine to come up
	I1209 23:43:48.273670   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:48.274128   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:48.274160   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:48.274075   86950 retry.go:31] will retry after 1.549923986s: waiting for machine to come up
	I1209 23:43:49.825784   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:49.826227   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:49.826250   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:49.826161   86950 retry.go:31] will retry after 2.038935598s: waiting for machine to come up
	I1209 23:43:51.866265   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:51.866767   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:51.866795   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:51.866712   86950 retry.go:31] will retry after 2.246478528s: waiting for machine to come up
	I1209 23:43:54.116049   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:54.116426   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:54.116449   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:54.116371   86950 retry.go:31] will retry after 3.260771273s: waiting for machine to come up
	I1209 23:43:57.379356   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:57.379779   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:57.379802   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:57.379739   86950 retry.go:31] will retry after 4.229679028s: waiting for machine to come up
	I1209 23:44:01.610807   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.611231   86928 main.go:141] libmachine: (addons-327804) Found IP for machine: 192.168.39.22
	I1209 23:44:01.611267   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has current primary IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.611280   86928 main.go:141] libmachine: (addons-327804) Reserving static IP address...
	I1209 23:44:01.611660   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find host DHCP lease matching {name: "addons-327804", mac: "52:54:00:6e:5b:83", ip: "192.168.39.22"} in network mk-addons-327804
	I1209 23:44:01.681860   86928 main.go:141] libmachine: (addons-327804) Reserved static IP address: 192.168.39.22
	I1209 23:44:01.681893   86928 main.go:141] libmachine: (addons-327804) Waiting for SSH to be available...
	I1209 23:44:01.681902   86928 main.go:141] libmachine: (addons-327804) DBG | Getting to WaitForSSH function...
	I1209 23:44:01.684772   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.685211   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.685243   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.685412   86928 main.go:141] libmachine: (addons-327804) DBG | Using SSH client type: external
	I1209 23:44:01.685437   86928 main.go:141] libmachine: (addons-327804) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa (-rw-------)
	I1209 23:44:01.685471   86928 main.go:141] libmachine: (addons-327804) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:44:01.685485   86928 main.go:141] libmachine: (addons-327804) DBG | About to run SSH command:
	I1209 23:44:01.685501   86928 main.go:141] libmachine: (addons-327804) DBG | exit 0
	I1209 23:44:01.814171   86928 main.go:141] libmachine: (addons-327804) DBG | SSH cmd err, output: <nil>: 
	I1209 23:44:01.814483   86928 main.go:141] libmachine: (addons-327804) KVM machine creation complete!
	I1209 23:44:01.814883   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:44:01.815500   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:01.815690   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:01.815796   86928 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:44:01.815819   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:01.817177   86928 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:44:01.817195   86928 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:44:01.817202   86928 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:44:01.817210   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:01.819407   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.819751   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.819777   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.819904   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:01.820083   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.820228   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.820336   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:01.820458   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:01.820694   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:01.820705   86928 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:44:01.929249   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:01.929278   86928 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:44:01.929285   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:01.931934   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.932282   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.932311   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.932490   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:01.932695   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.932846   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.932964   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:01.933095   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:01.933272   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:01.933283   86928 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:44:02.042800   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:44:02.042869   86928 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:44:02.042878   86928 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:44:02.042897   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.043162   86928 buildroot.go:166] provisioning hostname "addons-327804"
	I1209 23:44:02.043195   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.043431   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.046239   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.046727   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.046756   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.046931   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.047130   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.047290   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.047408   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.047607   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.047822   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.047836   86928 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-327804 && echo "addons-327804" | sudo tee /etc/hostname
	I1209 23:44:02.171028   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-327804
	
	I1209 23:44:02.171070   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.173742   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.174068   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.174102   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.174315   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.174510   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.174708   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.174870   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.175042   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.175264   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.175282   86928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-327804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-327804/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-327804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:44:02.295301   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:02.295339   86928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1209 23:44:02.295376   86928 buildroot.go:174] setting up certificates
	I1209 23:44:02.295389   86928 provision.go:84] configureAuth start
	I1209 23:44:02.295400   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.295707   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:02.298422   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.298771   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.298802   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.298911   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.301005   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.301320   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.301349   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.301510   86928 provision.go:143] copyHostCerts
	I1209 23:44:02.301603   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1209 23:44:02.301776   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1209 23:44:02.301888   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1209 23:44:02.302051   86928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.addons-327804 san=[127.0.0.1 192.168.39.22 addons-327804 localhost minikube]
	I1209 23:44:02.392285   86928 provision.go:177] copyRemoteCerts
	I1209 23:44:02.392358   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:44:02.392385   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.395299   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.395647   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.395676   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.395899   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.396075   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.396234   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.396368   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:02.479905   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:44:02.502117   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 23:44:02.523286   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:44:02.546305   86928 provision.go:87] duration metric: took 250.901798ms to configureAuth
	I1209 23:44:02.546339   86928 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:44:02.546495   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:02.546618   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.549341   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.549788   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.549811   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.549945   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.550137   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.550291   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.550455   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.550621   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.550834   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.550856   86928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:44:03.099509   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
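The command above writes the /etc/sysconfig/crio.minikube drop-in (adding the service CIDR 10.96.0.0/12 as an insecure registry) and restarts CRI-O so it takes effect. A sketch of a manual spot check on the guest, assuming the crio unit on the minikube image sources that drop-in:

	cat /etc/sysconfig/crio.minikube
	pgrep -a crio    # if the unit sources the drop-in, the running command line should show --insecure-registry 10.96.0.0/12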
	I1209 23:44:03.099536   86928 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:44:03.099544   86928 main.go:141] libmachine: (addons-327804) Calling .GetURL
	I1209 23:44:03.100900   86928 main.go:141] libmachine: (addons-327804) DBG | Using libvirt version 6000000
	I1209 23:44:03.103437   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.103743   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.103772   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.103964   86928 main.go:141] libmachine: Docker is up and running!
	I1209 23:44:03.103976   86928 main.go:141] libmachine: Reticulating splines...
	I1209 23:44:03.103984   86928 client.go:171] duration metric: took 22.226152223s to LocalClient.Create
	I1209 23:44:03.104006   86928 start.go:167] duration metric: took 22.226220642s to libmachine.API.Create "addons-327804"
	I1209 23:44:03.104024   86928 start.go:293] postStartSetup for "addons-327804" (driver="kvm2")
	I1209 23:44:03.104036   86928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:44:03.104053   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.104257   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:44:03.104286   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.106425   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.106773   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.106801   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.106947   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.107102   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.107246   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.107367   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.192050   86928 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:44:03.195674   86928 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:44:03.195701   86928 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1209 23:44:03.195778   86928 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1209 23:44:03.195806   86928 start.go:296] duration metric: took 91.77425ms for postStartSetup
	I1209 23:44:03.195842   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:44:03.214336   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:03.216753   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.217097   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.217125   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.217379   86928 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json ...
	I1209 23:44:03.278348   86928 start.go:128] duration metric: took 22.417241644s to createHost
	I1209 23:44:03.278391   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.280868   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.281165   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.281215   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.281329   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.281538   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.281690   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.281829   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.281997   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:03.282175   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:03.282195   86928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:44:03.394890   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733787843.369809494
	
	I1209 23:44:03.394926   86928 fix.go:216] guest clock: 1733787843.369809494
	I1209 23:44:03.394934   86928 fix.go:229] Guest: 2024-12-09 23:44:03.369809494 +0000 UTC Remote: 2024-12-09 23:44:03.278372278 +0000 UTC m=+22.516027277 (delta=91.437216ms)
	I1209 23:44:03.394979   86928 fix.go:200] guest clock delta is within tolerance: 91.437216ms
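The delta reported above is simply the guest clock minus the host-side timestamp: 1733787843.369809494 - 1733787843.278372278 ≈ 0.0914 s, i.e. the 91.437216ms shown, which is why no guest clock adjustment is attempted before the machines lock is released.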
	I1209 23:44:03.394993   86928 start.go:83] releasing machines lock for "addons-327804", held for 22.533968839s
	I1209 23:44:03.395016   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.395271   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:03.397874   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.398210   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.398243   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.398418   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.398862   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.399024   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.399110   86928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:44:03.399151   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.399183   86928 ssh_runner.go:195] Run: cat /version.json
	I1209 23:44:03.399208   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.401550   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.401771   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.401912   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.401938   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.402080   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.402095   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.402106   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.402268   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.402285   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.402434   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.402494   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.402636   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.402640   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.402759   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.503405   86928 ssh_runner.go:195] Run: systemctl --version
	I1209 23:44:03.509148   86928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:44:04.143482   86928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:44:04.149978   86928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:44:04.150058   86928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:04.164249   86928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:44:04.164289   86928 start.go:495] detecting cgroup driver to use...
	I1209 23:44:04.164357   86928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:44:04.179572   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:44:04.192217   86928 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:44:04.192263   86928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:44:04.204386   86928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:44:04.216516   86928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:44:04.330735   86928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:44:04.470835   86928 docker.go:233] disabling docker service ...
	I1209 23:44:04.470912   86928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:44:04.485544   86928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:44:04.497698   86928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:44:04.633101   86928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:44:04.742096   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:44:04.754394   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:44:04.770407   86928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:44:04.770460   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.779547   86928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:44:04.779597   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.788850   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.797834   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.806902   86928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:44:04.816191   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.825058   86928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.839776   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.848904   86928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:44:04.857138   86928 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:44:04.857180   86928 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:44:04.869011   86928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:44:04.877184   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:04.994409   86928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1209 23:44:05.083715   86928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:44:05.083806   86928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:44:05.088015   86928 start.go:563] Will wait 60s for crictl version
	I1209 23:44:05.088067   86928 ssh_runner.go:195] Run: which crictl
	I1209 23:44:05.091453   86928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:44:05.125461   86928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
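The sed edits above switch CRI-O to the cgroupfs cgroup manager, run conmon in the pod cgroup, pin the pause image to registry.k8s.io/pause:3.10, and allow pods to bind privileged ports (net.ipv4.ip_unprivileged_port_start=0); the crictl output then confirms CRI-O 1.29.1 serving CRI v1. A sketch for re-checking the resulting settings on the guest, using the same file the log edits:

	sudo grep -E 'cgroup_manager|conmon_cgroup|pause_image' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl version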
	I1209 23:44:05.125557   86928 ssh_runner.go:195] Run: crio --version
	I1209 23:44:05.150068   86928 ssh_runner.go:195] Run: crio --version
	I1209 23:44:05.176119   86928 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:44:05.177267   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:05.180022   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:05.180478   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:05.180498   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:05.180737   86928 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:44:05.184334   86928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:05.195606   86928 kubeadm.go:883] updating cluster {Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:44:05.195708   86928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:05.195745   86928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:05.228699   86928 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:44:05.228757   86928 ssh_runner.go:195] Run: which lz4
	I1209 23:44:05.232192   86928 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:44:05.235703   86928 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:44:05.235730   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:44:06.400205   86928 crio.go:462] duration metric: took 1.168034461s to copy over tarball
	I1209 23:44:06.400280   86928 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:44:08.366438   86928 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.966106221s)
	I1209 23:44:08.366474   86928 crio.go:469] duration metric: took 1.966239202s to extract the tarball
	I1209 23:44:08.366483   86928 ssh_runner.go:146] rm: /preloaded.tar.lz4
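For scale: the preload tarball is 392,059,347 bytes (~374 MiB); copying it into the guest took about 1.17s and extracting it about 1.97s (roughly 200 MB/s), after which the crictl images check below finds everything already present and no images need to be pulled.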
	I1209 23:44:08.402189   86928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:08.441003   86928 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:08.441026   86928 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:44:08.441034   86928 kubeadm.go:934] updating node { 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1209 23:44:08.441172   86928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-327804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
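The [Service] drop-in above ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); the empty ExecStart= line is standard systemd drop-in practice for clearing the packaged command before the minikube-specific one is set. A sketch for inspecting the effective unit on the guest:

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf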
	I1209 23:44:08.441249   86928 ssh_runner.go:195] Run: crio config
	I1209 23:44:08.483454   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:44:08.483477   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:44:08.483486   86928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:44:08.483511   86928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-327804 NodeName:addons-327804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:44:08.483660   86928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-327804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
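The rendered config above combines a kubeadm.k8s.io/v1beta4 InitConfiguration and ClusterConfiguration with a KubeletConfiguration and a KubeProxyConfiguration, and is written to /var/tmp/minikube/kubeadm.yaml.new below before being copied into place. A sketch of a manual sanity check with the bundled kubeadm binary (the validate subcommand is available in recent kubeadm releases):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml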
	I1209 23:44:08.483734   86928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:44:08.492640   86928 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:44:08.492708   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:44:08.501462   86928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 23:44:08.516710   86928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:44:08.530966   86928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I1209 23:44:08.545576   86928 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I1209 23:44:08.548900   86928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:08.559450   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:08.675550   86928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:08.691022   86928 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804 for IP: 192.168.39.22
	I1209 23:44:08.691046   86928 certs.go:194] generating shared ca certs ...
	I1209 23:44:08.691065   86928 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.691207   86928 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1209 23:44:08.942897   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt ...
	I1209 23:44:08.942927   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt: {Name:mkf2978b46aec7c7d5417e4710a2b718935c7d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.943087   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key ...
	I1209 23:44:08.943098   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key: {Name:mkf00ec6ca7c6015e1d641e357e85d6ce1c54cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.943170   86928 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1209 23:44:09.220123   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt ...
	I1209 23:44:09.220150   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt: {Name:mk56f9f07e96af9ce9147ed2b56a10686bae6c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.220320   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key ...
	I1209 23:44:09.220334   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key: {Name:mkee0faa24d1c6cf590bf83ee394a96e62ebb923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.220403   86928 certs.go:256] generating profile certs ...
	I1209 23:44:09.220485   86928 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key
	I1209 23:44:09.220501   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt with IP's: []
	I1209 23:44:09.351458   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt ...
	I1209 23:44:09.351486   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: {Name:mk11cb4170a81b64e18c85f9fa97b4f70e4ea9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.351635   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key ...
	I1209 23:44:09.351645   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key: {Name:mk8c977160e45fcfce49e593a5b4639fe8980487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.351712   86928 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d
	I1209 23:44:09.351729   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22]
	I1209 23:44:09.452195   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d ...
	I1209 23:44:09.452226   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d: {Name:mkafce7c2457e1bd7194ec34cf3560cce14a69fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.452380   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d ...
	I1209 23:44:09.452392   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d: {Name:mkf3b659da29c5208a8f2793c35495cfa2f39e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.452469   86928 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt
	I1209 23:44:09.452543   86928 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key
	I1209 23:44:09.452588   86928 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key
	I1209 23:44:09.452606   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt with IP's: []
	I1209 23:44:09.530678   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt ...
	I1209 23:44:09.530712   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt: {Name:mk5d5f84a2f92697814cfa67a696461679d0d719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.530880   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key ...
	I1209 23:44:09.530893   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key: {Name:mk2fc5c1c90ecbd59db084f19e469dfa742178a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.531074   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:44:09.531118   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1209 23:44:09.531146   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:44:09.531174   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1209 23:44:09.531759   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:44:09.557306   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:44:09.579044   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:44:09.604914   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:44:09.626547   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:44:09.647592   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:44:09.668535   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:44:09.689536   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:44:09.710681   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:44:09.732012   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:44:09.746842   86928 ssh_runner.go:195] Run: openssl version
	I1209 23:44:09.752203   86928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:44:09.761799   86928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.765922   86928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.765975   86928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.771392   86928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
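With the profile certificates copied into /var/lib/minikube/certs and the minikubeCA linked into /etc/ssl/certs above, the apiserver certificate's SANs (generated earlier for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.22) can be inspected directly on the guest; a sketch:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A2 'Subject Alternative Name'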
	I1209 23:44:09.781109   86928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:44:09.784743   86928 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:44:09.784795   86928 kubeadm.go:392] StartCluster: {Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:09.784893   86928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:44:09.784936   86928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:44:09.816553   86928 cri.go:89] found id: ""
	I1209 23:44:09.816640   86928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:44:09.826151   86928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:44:09.834916   86928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:44:09.843339   86928 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:44:09.843360   86928 kubeadm.go:157] found existing configuration files:
	
	I1209 23:44:09.843404   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:44:09.851337   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:44:09.851376   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:44:09.859578   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:44:09.867468   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:44:09.867511   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:44:09.875698   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:44:09.883651   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:44:09.883695   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:44:09.892038   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:44:09.899869   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:44:09.899922   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:44:09.908140   86928 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:44:10.055322   86928 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:44:19.611197   86928 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:44:19.611269   86928 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:44:19.611398   86928 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:44:19.611524   86928 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:44:19.611616   86928 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:44:19.611668   86928 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:44:19.612939   86928 out.go:235]   - Generating certificates and keys ...
	I1209 23:44:19.613025   86928 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:44:19.613100   86928 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:44:19.613227   86928 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:44:19.613302   86928 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:44:19.613393   86928 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:44:19.613442   86928 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:44:19.613488   86928 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:44:19.613661   86928 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-327804 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I1209 23:44:19.613745   86928 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:44:19.613914   86928 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-327804 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I1209 23:44:19.614016   86928 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:44:19.614132   86928 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:44:19.614193   86928 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:44:19.614270   86928 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:44:19.614331   86928 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:44:19.614387   86928 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:44:19.614428   86928 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:44:19.614477   86928 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:44:19.614523   86928 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:44:19.614638   86928 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:44:19.614739   86928 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:44:19.616024   86928 out.go:235]   - Booting up control plane ...
	I1209 23:44:19.616144   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:44:19.616252   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:44:19.616342   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:44:19.616495   86928 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:44:19.616618   86928 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:44:19.616682   86928 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:44:19.616867   86928 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:44:19.617011   86928 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:44:19.617102   86928 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.07041ms
	I1209 23:44:19.617193   86928 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:44:19.617282   86928 kubeadm.go:310] [api-check] The API server is healthy after 5.002351731s
	I1209 23:44:19.617455   86928 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:44:19.617594   86928 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:44:19.617651   86928 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:44:19.617885   86928 kubeadm.go:310] [mark-control-plane] Marking the node addons-327804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:44:19.617976   86928 kubeadm.go:310] [bootstrap-token] Using token: 1dhh9t.u8r2jfyc7htbxy61
	I1209 23:44:19.620165   86928 out.go:235]   - Configuring RBAC rules ...
	I1209 23:44:19.620264   86928 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:44:19.620351   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:44:19.620505   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:44:19.620663   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:44:19.620825   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:44:19.620935   86928 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:44:19.621066   86928 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:44:19.621107   86928 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:44:19.621171   86928 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:44:19.621190   86928 kubeadm.go:310] 
	I1209 23:44:19.621269   86928 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:44:19.621279   86928 kubeadm.go:310] 
	I1209 23:44:19.621394   86928 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:44:19.621406   86928 kubeadm.go:310] 
	I1209 23:44:19.621438   86928 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:44:19.621521   86928 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:44:19.621593   86928 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:44:19.621604   86928 kubeadm.go:310] 
	I1209 23:44:19.621667   86928 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:44:19.621676   86928 kubeadm.go:310] 
	I1209 23:44:19.621712   86928 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:44:19.621718   86928 kubeadm.go:310] 
	I1209 23:44:19.621766   86928 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:44:19.621837   86928 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:44:19.621910   86928 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:44:19.621923   86928 kubeadm.go:310] 
	I1209 23:44:19.622027   86928 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:44:19.622134   86928 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:44:19.622146   86928 kubeadm.go:310] 
	I1209 23:44:19.622244   86928 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1dhh9t.u8r2jfyc7htbxy61 \
	I1209 23:44:19.622381   86928 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1209 23:44:19.622417   86928 kubeadm.go:310] 	--control-plane 
	I1209 23:44:19.622425   86928 kubeadm.go:310] 
	I1209 23:44:19.622553   86928 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:44:19.622572   86928 kubeadm.go:310] 
	I1209 23:44:19.622695   86928 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1dhh9t.u8r2jfyc7htbxy61 \
	I1209 23:44:19.622869   86928 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
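The join commands above embed a bootstrap token and a CA public-key hash. The hash can be recomputed from the cluster CA and compared with the value in the log; the pipeline below is the standard kubeadm recipe, pointed at the certificateDir used earlier in this run (/var/lib/minikube/certs):

    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected to match sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 above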
	I1209 23:44:19.622882   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:44:19.622888   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:44:19.624611   86928 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:44:19.625877   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:44:19.637101   86928 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
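The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI configuration announced at the previous step; its contents are not captured in this log, but they can be read back from the node (profile name taken from this run):

    $ minikube -p addons-327804 ssh 'sudo cat /etc/cni/net.d/1-k8s.conflist'
    $ minikube -p addons-327804 ssh 'ls /etc/cni/net.d'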
	I1209 23:44:19.657389   86928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:44:19.657507   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:19.657521   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-327804 minikube.k8s.io/updated_at=2024_12_09T23_44_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=addons-327804 minikube.k8s.io/primary=true
	I1209 23:44:19.673438   86928 ops.go:34] apiserver oom_adj: -16
	I1209 23:44:19.779257   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.279456   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.779940   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.279524   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.779979   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.279777   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.779962   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.279824   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.780274   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.854779   86928 kubeadm.go:1113] duration metric: took 4.197332151s to wait for elevateKubeSystemPrivileges
	I1209 23:44:23.854825   86928 kubeadm.go:394] duration metric: took 14.070033437s to StartCluster
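The repeated "kubectl get sa default" calls above are minikube polling roughly every 500 ms until the default ServiceAccount exists, which is what the 4.19 s elevateKubeSystemPrivileges metric measures. An equivalent ad-hoc wait, as a sketch only (context name from this run):

    # wait until the default ServiceAccount shows up in the default namespace
    until kubectl --context addons-327804 -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done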
	I1209 23:44:23.854854   86928 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:23.854988   86928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:44:23.855559   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:23.855785   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:44:23.855817   86928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:44:23.855863   86928 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
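The toEnable map above is the addon selection for this TestAddons run: ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volcano, volumesnapshots, yakd and friends are on, while dashboard, headlamp, olm and the rest stay off. Outside the test harness the same toggles map onto the minikube CLI, for example:

    $ minikube addons list -p addons-327804                  # enabled/disabled state per addon
    $ minikube addons enable metrics-server -p addons-327804
    $ minikube addons disable volcano -p addons-327804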
	I1209 23:44:23.855988   86928 addons.go:69] Setting yakd=true in profile "addons-327804"
	I1209 23:44:23.856010   86928 addons.go:234] Setting addon yakd=true in "addons-327804"
	I1209 23:44:23.856005   86928 addons.go:69] Setting metrics-server=true in profile "addons-327804"
	I1209 23:44:23.856028   86928 addons.go:69] Setting volcano=true in profile "addons-327804"
	I1209 23:44:23.856027   86928 addons.go:69] Setting storage-provisioner=true in profile "addons-327804"
	I1209 23:44:23.856048   86928 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-327804"
	I1209 23:44:23.856053   86928 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-327804"
	I1209 23:44:23.856061   86928 addons.go:69] Setting volumesnapshots=true in profile "addons-327804"
	I1209 23:44:23.856065   86928 addons.go:234] Setting addon storage-provisioner=true in "addons-327804"
	I1209 23:44:23.856072   86928 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-327804"
	I1209 23:44:23.856086   86928 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-327804"
	I1209 23:44:23.856084   86928 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-327804"
	I1209 23:44:23.856098   86928 addons.go:69] Setting registry=true in profile "addons-327804"
	I1209 23:44:23.856110   86928 addons.go:69] Setting ingress=true in profile "addons-327804"
	I1209 23:44:23.856112   86928 addons.go:69] Setting ingress-dns=true in profile "addons-327804"
	I1209 23:44:23.856117   86928 addons.go:234] Setting addon registry=true in "addons-327804"
	I1209 23:44:23.856120   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856122   86928 addons.go:234] Setting addon ingress=true in "addons-327804"
	I1209 23:44:23.856126   86928 addons.go:234] Setting addon ingress-dns=true in "addons-327804"
	I1209 23:44:23.856139   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856155   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856155   86928 addons.go:69] Setting default-storageclass=true in profile "addons-327804"
	I1209 23:44:23.856187   86928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-327804"
	I1209 23:44:23.856150   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856560   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856571   86928 addons.go:69] Setting gcp-auth=true in profile "addons-327804"
	I1209 23:44:23.856586   86928 mustload.go:65] Loading cluster: addons-327804
	I1209 23:44:23.856587   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856589   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856605   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856075   86928 addons.go:234] Setting addon volumesnapshots=true in "addons-327804"
	I1209 23:44:23.856631   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856632   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856645   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856048   86928 addons.go:234] Setting addon volcano=true in "addons-327804"
	I1209 23:44:23.855992   86928 addons.go:69] Setting cloud-spanner=true in profile "addons-327804"
	I1209 23:44:23.856700   86928 addons.go:69] Setting inspektor-gadget=true in profile "addons-327804"
	I1209 23:44:23.856605   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856713   86928 addons.go:234] Setting addon inspektor-gadget=true in "addons-327804"
	I1209 23:44:23.856700   86928 addons.go:234] Setting addon cloud-spanner=true in "addons-327804"
	I1209 23:44:23.856040   86928 addons.go:234] Setting addon metrics-server=true in "addons-327804"
	I1209 23:44:23.856730   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856757   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:23.856102   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856106   86928 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-327804"
	I1209 23:44:23.856088   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:23.856019   86928 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-327804"
	I1209 23:44:23.856963   86928 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-327804"
	I1209 23:44:23.856561   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856987   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857058   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856050   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857080   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857113   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857132   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857148   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857169   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857187   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857203   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857390   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857406   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857412   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857424   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857570   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857706   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857710   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857734   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857876   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857972   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.858001   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.858155   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.858199   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.858215   86928 out.go:177] * Verifying Kubernetes components...
	I1209 23:44:23.858479   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.859661   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:23.872292   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I1209 23:44:23.875068   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875113   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875195   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875212   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875517   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875549   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875911   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.876689   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.876712   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.876816   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I1209 23:44:23.876954   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I1209 23:44:23.877141   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.885609   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.885707   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.885707   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.886246   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.886263   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.886455   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.886471   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.886890   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.886954   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.887228   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.887325   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.889735   86928 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-327804"
	I1209 23:44:23.889782   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.890126   86928 addons.go:234] Setting addon default-storageclass=true in "addons-327804"
	I1209 23:44:23.890151   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.890170   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.890185   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.890538   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.890584   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.891220   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.891584   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.891618   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.901644   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1209 23:44:23.902150   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.902772   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.902795   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.903383   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.904096   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.904135   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.904700   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I1209 23:44:23.905067   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.905667   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.905692   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.906097   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.906664   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.906701   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.914227   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I1209 23:44:23.914623   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.915212   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.915232   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.915611   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.916142   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.916179   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.916528   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I1209 23:44:23.917012   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.917546   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.917565   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.917631   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I1209 23:44:23.918103   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.918163   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.918241   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I1209 23:44:23.918917   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.918958   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.919251   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.919266   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.919322   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I1209 23:44:23.919599   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.920116   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.920150   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.920364   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.920924   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.920941   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.921424   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.921482   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I1209 23:44:23.921778   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.922208   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.922240   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.922434   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.922976   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.922995   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.923817   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.923834   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.924153   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.930445   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.930450   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I1209 23:44:23.930834   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.931355   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.931381   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.931721   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.935272   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I1209 23:44:23.935618   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.936123   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.936142   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.936212   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I1209 23:44:23.936739   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.936783   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.937139   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.937159   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.937195   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.937510   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.939017   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939055   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939059   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939091   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939114   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939153   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939636   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939671   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939845   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I1209 23:44:23.940637   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.941131   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.941156   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.941466   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.941529   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I1209 23:44:23.942118   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.942158   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.942873   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.943338   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.943364   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.943705   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.943881   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.944611   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1209 23:44:23.945105   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.945697   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.945715   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.946219   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.946276   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.946406   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.948187   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.948192   86928 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:44:23.949289   86928 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:44:23.949421   86928 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:23.949437   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:44:23.949457   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.951767   86928 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:44:23.952183   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.952594   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.952617   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.952871   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.952875   86928 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:44:23.952890   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:44:23.952907   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.953013   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.953116   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.953211   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
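Each sshutil line above records the connection an addon installer will use to copy manifests onto the node. The same session can be reproduced by hand with the key path, user and address shown (disabling host-key checking is an assumption appropriate only for this throwaway CI VM):

    $ ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa \
        docker@192.168.39.22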
	I1209 23:44:23.953847   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1209 23:44:23.954384   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.954977   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.955002   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.955339   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.955842   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.955856   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1209 23:44:23.955876   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.956363   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.956903   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.956922   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.957355   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.957563   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.958295   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.959131   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.959143   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.959169   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.959307   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.959549   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.959752   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.960932   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.961699   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I1209 23:44:23.962261   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.962982   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.963000   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.963121   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1209 23:44:23.963286   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:44:23.964400   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.964422   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:44:23.965013   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.965033   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.965437   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.966018   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.966059   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.966264   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1209 23:44:23.966472   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:44:23.966787   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.967393   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.967410   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.967464   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.967700   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.968647   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:44:23.969005   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.969664   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.969705   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.970328   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.970875   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:44:23.971731   86928 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:44:23.971777   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:44:23.972912   86928 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:23.972933   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:44:23.972952   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.973438   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:44:23.974624   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:44:23.975739   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:44:23.975764   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:44:23.975783   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.976227   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.976780   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.976809   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.976944   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.977026   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I1209 23:44:23.977362   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.977611   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.977842   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.978501   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.979067   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.979084   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.979144   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I1209 23:44:23.979514   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.979905   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.979975   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.979986   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.980328   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.980393   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.980411   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.980444   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.981064   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.981107   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.981319   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.981358   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.981406   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1209 23:44:23.981462   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.981580   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.981652   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.981994   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.982479   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.982501   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.982965   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.983172   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.985527   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.987363   86928 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:44:23.988232   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1209 23:44:23.988416   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:44:23.988429   86928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:44:23.988446   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.989650   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.989679   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.990399   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.990425   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.990887   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.991112   86928 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:44:23.991131   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.992081   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I1209 23:44:23.992325   86928 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:23.992346   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:44:23.992364   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.992411   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.992840   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.992871   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.993057   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.993120   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.993353   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.993548   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.993729   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.994255   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.994272   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.994327   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.995940   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.996034   86928 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:44:23.996261   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.996325   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.996340   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.996545   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.996879   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1209 23:44:23.996904   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.997025   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.997205   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.997482   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.997752   86928 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:44:23.997768   86928 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:44:23.997785   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.998077   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.998099   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.998475   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.998528   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.998949   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1209 23:44:23.999289   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.999349   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I1209 23:44:23.999492   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.999719   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.000122   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.000152   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.000480   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.000618   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.000634   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.000693   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.000743   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.001278   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.001656   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.002683   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.002702   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.002735   86928 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:44:24.003178   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.003216   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.003412   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.003529   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.003602   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.003762   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.003880   86928 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:44:24.003993   86928 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:24.004017   86928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:44:24.004031   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.004043   86928 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:24.004052   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:44:24.004064   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.004086   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.004556   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.005344   86928 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:24.005360   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:44:24.005378   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.005843   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:44:24.007124   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I1209 23:44:24.007182   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:44:24.007197   86928 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:44:24.007222   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.007880   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.008378   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1209 23:44:24.008559   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.008581   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.008722   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.008798   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.008841   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1209 23:44:24.009096   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.009322   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.009340   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.009383   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.009459   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.009859   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.009929   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.009943   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.010061   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.010080   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.010157   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.010204   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.010300   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.010308   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.010352   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.010400   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I1209 23:44:24.010439   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.010926   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.011373   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.011440   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.011480   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.011489   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.011502   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.011566   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.011723   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.011782   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.011963   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.012131   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.012311   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.012446   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.012641   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013263   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013685   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.013703   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013749   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014061   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:24.014068   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014073   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:24.014443   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:24.014453   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:24.014471   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:24.014479   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:24.014487   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:24.014488   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014744   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014988   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.015014   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.015280   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.015327   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:24.015327   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.015335   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 23:44:24.015397   86928 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 23:44:24.015562   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.015562   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.015703   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.015705   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.015846   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.015850   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.015958   86928 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:44:24.016007   86928 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:44:24.016008   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:24.017425   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:44:24.017444   86928 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:44:24.017469   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.017470   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:24.017531   86928 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:44:24.018801   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:44:24.018831   86928 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:24.018845   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:44:24.018866   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.019988   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.020043   86928 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:24.020062   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:44:24.020078   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.020361   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.020387   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.020545   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.020697   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.020814   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.020934   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.022425   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.022879   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.022906   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023109   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.023252   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.023287   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023401   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.023527   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.023741   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.023758   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023907   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.024062   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.024207   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.024309   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.385405   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:24.425587   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:44:24.425628   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:44:24.443975   86928 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:44:24.444003   86928 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:44:24.497909   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:44:24.497938   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:44:24.513248   86928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:24.513367   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 23:44:24.553500   86928 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:24.553527   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:44:24.576672   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:24.580288   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:24.593316   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:24.595716   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:24.598693   86928 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:24.598716   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:44:24.609390   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:44:24.609408   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:44:24.611226   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:44:24.611240   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:44:24.613280   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:24.615115   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:24.616544   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:24.645751   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:44:24.645772   86928 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:44:24.747585   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:44:24.747616   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:44:24.758366   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:24.769892   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:44:24.769909   86928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:44:24.784902   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:24.804978   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:44:24.804999   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:44:24.840231   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:44:24.840258   86928 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:44:24.956704   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:44:24.956734   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:44:24.963585   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:44:24.963610   86928 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:44:24.983142   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:24.983165   86928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:44:25.028304   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:44:25.028334   86928 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:44:25.173249   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:25.194747   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:44:25.194776   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:44:25.214411   86928 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:25.214435   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:44:25.216820   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:25.216838   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:44:25.423729   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:44:25.423760   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:44:25.437459   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:25.457853   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:25.731266   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:44:25.731289   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:44:26.179725   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:44:26.179763   86928 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:44:26.463793   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:44:26.463816   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:44:26.557953   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.17251362s)
	I1209 23:44:26.557997   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.558008   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.558323   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.558340   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.558348   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.558354   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.558670   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.558686   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.558720   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:26.686941   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:44:26.686980   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:44:26.749689   86928 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.236401378s)
	I1209 23:44:26.749725   86928 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.236316316s)
	I1209 23:44:26.749754   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.17304165s)
	I1209 23:44:26.749754   86928 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 23:44:26.749797   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.749813   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.750139   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.750182   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.750197   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.750206   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.750491   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.750508   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.750798   86928 node_ready.go:35] waiting up to 6m0s for node "addons-327804" to be "Ready" ...
	I1209 23:44:26.768727   86928 node_ready.go:49] node "addons-327804" has status "Ready":"True"
	I1209 23:44:26.768748   86928 node_ready.go:38] duration metric: took 17.924325ms for node "addons-327804" to be "Ready" ...
	I1209 23:44:26.768756   86928 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:44:26.792104   86928 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:27.100615   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:27.100652   86928 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:44:27.257506   86928 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-327804" context rescaled to 1 replicas
	I1209 23:44:27.394984   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:27.637459   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.057127618s)
	I1209 23:44:27.637544   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:27.637561   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:27.638001   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:27.638012   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:27.638044   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:27.638066   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:27.638080   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:27.638390   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:27.638408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:28.798788   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:30.826051   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:31.008311   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:44:31.008367   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:31.012341   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.012909   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:31.012939   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.013209   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:31.013399   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:31.013557   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:31.013715   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:31.563991   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1209 23:44:31.678979   86928 addons.go:234] Setting addon gcp-auth=true in "addons-327804"
	I1209 23:44:31.679040   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:31.679355   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:31.679400   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:31.694819   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1209 23:44:31.695362   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:31.695907   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:31.695934   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:31.696307   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:31.696762   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:31.696811   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:31.712222   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1209 23:44:31.712708   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:31.713216   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:31.713245   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:31.713574   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:31.713765   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:31.715540   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:31.715760   86928 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:44:31.715786   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:31.718599   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.719076   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:31.719108   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.719274   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:31.719452   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:31.719620   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:31.719763   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:32.301404   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.708056236s)
	I1209 23:44:32.301457   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301473   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301487   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.705744381s)
	I1209 23:44:32.301529   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301544   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301561   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.688261924s)
	I1209 23:44:32.301586   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301603   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301668   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.686522911s)
	I1209 23:44:32.301705   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301722   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301786   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.301797   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.301805   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301812   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301813   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.685247937s)
	I1209 23:44:32.301834   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301844   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301843   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.301884   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.301892   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.301899   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301905   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301944   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.54355653s)
	I1209 23:44:32.301960   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301969   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301983   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302000   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302013   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302021   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302029   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302029   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.517102194s)
	I1209 23:44:32.302048   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302056   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302093   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.1288127s)
	I1209 23:44:32.302104   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302114   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302119   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302126   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302139   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302145   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302152   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302157   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302198   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302204   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302213   86928 addons.go:475] Verifying addon ingress=true in "addons-327804"
	I1209 23:44:32.302259   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.864767899s)
	W1209 23:44:32.302287   86928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:44:32.302371   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.844493328s)
	I1209 23:44:32.302396   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302405   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302469   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302492   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302498   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302506   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302512   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.303389   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.303419   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.303426   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.303434   86928 addons.go:475] Verifying addon metrics-server=true in "addons-327804"
	I1209 23:44:32.305109   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305143   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305150   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305271   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305282   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305289   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305303   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305312   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.305319   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.305394   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305464   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305489   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305495   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305505   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.305511   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.305690   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305712   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305717   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305825   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305854   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305860   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306283   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306296   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306304   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.306311   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.306377   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.306401   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306415   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.306422   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.306594   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306604   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.307376   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.307387   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.307392   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.307406   86928 addons.go:475] Verifying addon registry=true in "addons-327804"
	I1209 23:44:32.307413   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.307442   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.307450   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302325   86928 retry.go:31] will retry after 297.02029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:44:32.308218   86928 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-327804 service yakd-dashboard -n yakd-dashboard
	
	I1209 23:44:32.308961   86928 out.go:177] * Verifying registry addon...
	I1209 23:44:32.309829   86928 out.go:177] * Verifying ingress addon...
	I1209 23:44:32.310987   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:44:32.311612   86928 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:44:32.323319   86928 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:44:32.323339   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.327050   86928 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:32.327075   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.332104   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.332128   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.332217   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.332240   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.332475   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.332491   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 23:44:32.332613   86928 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1209 23:44:32.332621   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.332658   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.332645   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.605544   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:32.816424   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.817813   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.313388   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:33.325991   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.326015   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.190112   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.190836   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.326603   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.931538446s)
	I1209 23:44:34.326653   86928 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.610870307s)
	I1209 23:44:34.326668   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.326688   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.326947   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.326970   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.326979   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.326987   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.326993   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:34.327234   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.327254   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.327265   86928 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-327804"
	I1209 23:44:34.328138   86928 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:44:34.329025   86928 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:44:34.330503   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:34.331170   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:44:34.331642   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:44:34.331659   86928 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:44:34.346166   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.346475   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.346888   86928 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:34.346907   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.458396   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:44:34.458429   86928 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:44:34.540415   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:34.540442   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:44:34.616397   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:34.628682   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.023075477s)
	I1209 23:44:34.628736   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.628754   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.629063   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.629105   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.629129   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.629143   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.629369   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:34.629384   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.629399   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.835189   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.837120   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.844063   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.316804   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.316978   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.336061   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.783963   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.167521855s)
	I1209 23:44:35.784022   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:35.784036   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:35.784340   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:35.784386   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:35.784408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:35.784420   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:35.784428   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:35.784674   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:35.784693   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:35.786524   86928 addons.go:475] Verifying addon gcp-auth=true in "addons-327804"
	I1209 23:44:35.788919   86928 out.go:177] * Verifying gcp-auth addon...
	I1209 23:44:35.790598   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:44:35.833053   86928 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:44:35.833078   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:35.835699   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.835839   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.840815   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:35.857730   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.294506   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.314395   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.316241   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.334919   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.795203   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.815677   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.816056   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.834952   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.300390   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.314307   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.316947   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.336155   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.797041   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.814235   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.816378   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.836224   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.295029   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.299659   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:38.314846   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.316943   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.337088   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.796253   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.817221   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.818458   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.836118   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.294433   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.315266   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.319143   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.336923   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.818396   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.915978   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.917074   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.917241   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.293606   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.313486   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.315399   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.334757   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.795019   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.797927   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:40.815293   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.815674   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.839790   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.294520   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.315785   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.315973   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.335759   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.793768   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.815193   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.816549   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.834524   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.294075   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.315060   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.315631   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.335339   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.794914   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.800847   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:42.813362   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.815211   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.834912   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.295310   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.313823   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.315793   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.335781   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.795567   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.814368   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.815903   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.835103   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.294796   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.313953   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.316079   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.335739   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.795436   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.814759   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.815851   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.834516   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.293779   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.297821   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:45.315390   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.315802   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.335924   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.794354   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.815468   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.815663   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.834637   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.293680   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.314676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.315878   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.335462   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.374018   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.374152   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.374236   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:47.374469   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.374501   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.380267   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.380408   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.380831   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.381546   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.794457   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.815120   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.816191   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.834817   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.296633   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.315665   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.316362   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.335744   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.794694   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.814933   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.817613   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.835965   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.294686   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.316104   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.316605   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.336005   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.795421   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.797853   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:49.814835   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.815463   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.835379   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.294938   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.316145   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.316312   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.335377   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.794407   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.815357   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.815569   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.836018   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.295492   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.315669   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.315768   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.334595   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.795630   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.797922   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:51.814426   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.815206   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.834995   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.294795   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.314740   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.315719   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.337208   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.795624   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.816190   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.817491   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.835286   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.293645   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.314842   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.316593   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.338661   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.023219   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.023321   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.024602   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.025403   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.025949   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:54.293900   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.315352   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.316072   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.337091   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.794592   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.814879   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.815535   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.835942   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.294204   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.315035   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.315573   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.336550   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.793446   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.815777   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.816098   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.835849   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.295920   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.298205   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:56.315461   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.316332   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.334806   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.796385   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.814769   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.815310   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.834907   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.294887   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.314045   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.315544   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.336350   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.796452   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.813770   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.815822   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.836353   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.293821   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.315365   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.315491   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.336178   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.796608   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.797938   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:58.815498   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.815810   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.835460   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.294946   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.315225   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.315549   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.334994   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.795004   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.798249   86928 pod_ready.go:93] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.798269   86928 pod_ready.go:82] duration metric: took 33.006139904s for pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.798278   86928 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.800086   86928 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mv8d4" not found
	I1209 23:44:59.800108   86928 pod_ready.go:82] duration metric: took 1.82311ms for pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace to be "Ready" ...
	E1209 23:44:59.800121   86928 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mv8d4" not found
	I1209 23:44:59.800133   86928 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.806876   86928 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.806895   86928 pod_ready.go:82] duration metric: took 6.755668ms for pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.806903   86928 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.812725   86928 pod_ready.go:93] pod "etcd-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.812748   86928 pod_ready.go:82] duration metric: took 5.837634ms for pod "etcd-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.812759   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.817158   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.817499   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.819782   86928 pod_ready.go:93] pod "kube-apiserver-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.819801   86928 pod_ready.go:82] duration metric: took 7.033791ms for pod "kube-apiserver-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.819813   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.834758   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.996874   86928 pod_ready.go:93] pod "kube-controller-manager-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.996896   86928 pod_ready.go:82] duration metric: took 177.075091ms for pod "kube-controller-manager-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.996906   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2cbzc" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.295676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.314329   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.316534   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.337627   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.396372   86928 pod_ready.go:93] pod "kube-proxy-2cbzc" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:00.396392   86928 pod_ready.go:82] duration metric: took 399.480869ms for pod "kube-proxy-2cbzc" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.396402   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.795159   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.796568   86928 pod_ready.go:93] pod "kube-scheduler-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:00.796588   86928 pod_ready.go:82] duration metric: took 400.179692ms for pod "kube-scheduler-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.796598   86928 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.814903   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.816724   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.835344   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.196494   86928 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:01.196520   86928 pod_ready.go:82] duration metric: took 399.915118ms for pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:01.196533   86928 pod_ready.go:39] duration metric: took 34.427764911s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
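	The "Ready":"True"/"False" values in the pod_ready.go lines above reflect the pod's PodReady condition. A minimal Go sketch of that check, using the standard k8s.io/api types (an illustration only, not minikube's actual pod_ready.go implementation):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the pod's Ready condition is True, the same
	// condition behind the "Ready":"True"/"False" values logged above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// A synthetic pod object just to exercise the helper.
		pod := &corev1.Pod{
			Status: corev1.PodStatus{
				Conditions: []corev1.PodCondition{
					{Type: corev1.PodReady, Status: corev1.ConditionTrue},
				},
			},
		}
		fmt.Println("ready:", isPodReady(pod)) // ready: true
	}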
	I1209 23:45:01.196555   86928 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:45:01.196619   86928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:45:01.216009   86928 api_server.go:72] duration metric: took 37.360157968s to wait for apiserver process to appear ...
	I1209 23:45:01.216037   86928 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:45:01.216060   86928 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1209 23:45:01.220831   86928 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I1209 23:45:01.221900   86928 api_server.go:141] control plane version: v1.31.2
	I1209 23:45:01.221922   86928 api_server.go:131] duration metric: took 5.879405ms to wait for apiserver health ...
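	The healthz probe logged above is a plain HTTPS GET against the apiserver. A minimal sketch of the same probe, assuming the endpoint URL from the log and skipping TLS verification purely for illustration (the real check would trust the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				// Illustration only; a real probe would verify against the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.22:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
	}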
	I1209 23:45:01.221951   86928 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:45:01.294011   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.315367   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.315833   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.335097   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.403397   86928 system_pods.go:59] 18 kube-system pods found
	I1209 23:45:01.403430   86928 system_pods.go:61] "amd-gpu-device-plugin-pkmlz" [017587ab-2377-4f9e-92e2-218a17992ac4] Running
	I1209 23:45:01.403435   86928 system_pods.go:61] "coredns-7c65d6cfc9-r5t4g" [7a0c206f-316c-4ffb-9211-a965ab776e73] Running
	I1209 23:45:01.403442   86928 system_pods.go:61] "csi-hostpath-attacher-0" [d20aef45-da7a-435c-9074-2b9dc1cd24db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:45:01.403448   86928 system_pods.go:61] "csi-hostpath-resizer-0" [23152550-a282-425c-afac-778089918479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:45:01.403457   86928 system_pods.go:61] "csi-hostpathplugin-k6r22" [206125d5-90c8-4598-b3aa-f9156187f289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:45:01.403461   86928 system_pods.go:61] "etcd-addons-327804" [d7b1bc10-ad72-4172-8b75-501badde178f] Running
	I1209 23:45:01.403465   86928 system_pods.go:61] "kube-apiserver-addons-327804" [f7a261b7-39ac-450f-842e-dc53e5e91214] Running
	I1209 23:45:01.403468   86928 system_pods.go:61] "kube-controller-manager-addons-327804" [caff5b88-a93a-46f5-9bd1-94d6153a13c8] Running
	I1209 23:45:01.403472   86928 system_pods.go:61] "kube-ingress-dns-minikube" [badf09c8-255f-4cbf-835d-fe1d2cf14471] Running
	I1209 23:45:01.403475   86928 system_pods.go:61] "kube-proxy-2cbzc" [ee54203a-77d6-4367-8ccb-208364419fea] Running
	I1209 23:45:01.403479   86928 system_pods.go:61] "kube-scheduler-addons-327804" [903789aa-d4d6-4348-93c7-55c9823816d6] Running
	I1209 23:45:01.403483   86928 system_pods.go:61] "metrics-server-84c5f94fbc-4d528" [8de05551-49ab-4933-852a-16b88842a109] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:45:01.403490   86928 system_pods.go:61] "nvidia-device-plugin-daemonset-4fmgx" [a89eaf64-40a3-4ab2-a394-a852c6a26f53] Running
	I1209 23:45:01.403495   86928 system_pods.go:61] "registry-5cc95cd69-sr6kt" [38920e52-e20a-4542-af24-1efcde928cf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 23:45:01.403500   86928 system_pods.go:61] "registry-proxy-rft2s" [6ff74e8e-3b66-4249-984f-1c881b667876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:45:01.403508   86928 system_pods.go:61] "snapshot-controller-56fcc65765-7ggrn" [2c529bb9-d4dd-41aa-ae16-5fd1853d334c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.403513   86928 system_pods.go:61] "snapshot-controller-56fcc65765-9ssqt" [6b3d1329-f736-4c18-8da6-a2e60b272146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.403521   86928 system_pods.go:61] "storage-provisioner" [7f8c8e7e-aef5-4f97-8808-537836392fb1] Running
	I1209 23:45:01.403528   86928 system_pods.go:74] duration metric: took 181.564053ms to wait for pod list to return data ...
	I1209 23:45:01.403538   86928 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:45:01.597069   86928 default_sa.go:45] found service account: "default"
	I1209 23:45:01.597100   86928 default_sa.go:55] duration metric: took 193.55531ms for default service account to be created ...
	I1209 23:45:01.597110   86928 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:45:01.794096   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.800721   86928 system_pods.go:86] 18 kube-system pods found
	I1209 23:45:01.800745   86928 system_pods.go:89] "amd-gpu-device-plugin-pkmlz" [017587ab-2377-4f9e-92e2-218a17992ac4] Running
	I1209 23:45:01.800751   86928 system_pods.go:89] "coredns-7c65d6cfc9-r5t4g" [7a0c206f-316c-4ffb-9211-a965ab776e73] Running
	I1209 23:45:01.800757   86928 system_pods.go:89] "csi-hostpath-attacher-0" [d20aef45-da7a-435c-9074-2b9dc1cd24db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:45:01.800764   86928 system_pods.go:89] "csi-hostpath-resizer-0" [23152550-a282-425c-afac-778089918479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:45:01.800771   86928 system_pods.go:89] "csi-hostpathplugin-k6r22" [206125d5-90c8-4598-b3aa-f9156187f289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:45:01.800776   86928 system_pods.go:89] "etcd-addons-327804" [d7b1bc10-ad72-4172-8b75-501badde178f] Running
	I1209 23:45:01.800780   86928 system_pods.go:89] "kube-apiserver-addons-327804" [f7a261b7-39ac-450f-842e-dc53e5e91214] Running
	I1209 23:45:01.800783   86928 system_pods.go:89] "kube-controller-manager-addons-327804" [caff5b88-a93a-46f5-9bd1-94d6153a13c8] Running
	I1209 23:45:01.800788   86928 system_pods.go:89] "kube-ingress-dns-minikube" [badf09c8-255f-4cbf-835d-fe1d2cf14471] Running
	I1209 23:45:01.800791   86928 system_pods.go:89] "kube-proxy-2cbzc" [ee54203a-77d6-4367-8ccb-208364419fea] Running
	I1209 23:45:01.800794   86928 system_pods.go:89] "kube-scheduler-addons-327804" [903789aa-d4d6-4348-93c7-55c9823816d6] Running
	I1209 23:45:01.800801   86928 system_pods.go:89] "metrics-server-84c5f94fbc-4d528" [8de05551-49ab-4933-852a-16b88842a109] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:45:01.800805   86928 system_pods.go:89] "nvidia-device-plugin-daemonset-4fmgx" [a89eaf64-40a3-4ab2-a394-a852c6a26f53] Running
	I1209 23:45:01.800810   86928 system_pods.go:89] "registry-5cc95cd69-sr6kt" [38920e52-e20a-4542-af24-1efcde928cf7] Running
	I1209 23:45:01.800815   86928 system_pods.go:89] "registry-proxy-rft2s" [6ff74e8e-3b66-4249-984f-1c881b667876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:45:01.800824   86928 system_pods.go:89] "snapshot-controller-56fcc65765-7ggrn" [2c529bb9-d4dd-41aa-ae16-5fd1853d334c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.800830   86928 system_pods.go:89] "snapshot-controller-56fcc65765-9ssqt" [6b3d1329-f736-4c18-8da6-a2e60b272146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.800834   86928 system_pods.go:89] "storage-provisioner" [7f8c8e7e-aef5-4f97-8808-537836392fb1] Running
	I1209 23:45:01.800842   86928 system_pods.go:126] duration metric: took 203.725819ms to wait for k8s-apps to be running ...
	I1209 23:45:01.800852   86928 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:45:01.800896   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:45:01.814682   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.815599   86928 system_svc.go:56] duration metric: took 14.735237ms WaitForService to wait for kubelet
	I1209 23:45:01.815625   86928 kubeadm.go:582] duration metric: took 37.959779657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:45:01.815650   86928 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:45:01.816510   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.834999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.996649   86928 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:45:01.996681   86928 node_conditions.go:123] node cpu capacity is 2
	I1209 23:45:01.996699   86928 node_conditions.go:105] duration metric: took 181.042355ms to run NodePressure ...
	I1209 23:45:01.996714   86928 start.go:241] waiting for startup goroutines ...
	I1209 23:45:02.299689   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.314241   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.314875   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.335141   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.793968   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.814653   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.814938   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.837603   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.293934   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.315295   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.315812   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.335557   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.794619   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.817112   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.817522   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.837271   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.296062   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.315519   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.317188   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.335957   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.793996   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.815270   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.817154   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.834971   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.294807   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.314881   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.315089   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.334337   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.793598   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.815868   86928 kapi.go:107] duration metric: took 33.504877747s to wait for kubernetes.io/minikube-addons=registry ...
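	The kapi.go waits above poll pods matching a label selector until they leave Pending. A hedged client-go sketch of one such poll iteration, assuming the in-VM kubeconfig path from the log (illustrative only, not minikube's kapi.go):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the kubectl invocations logged above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// List the registry addon pods by the same label selector the wait uses.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase) // e.g. registry-5cc95cd69-sr6kt Running
		}
	}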
	I1209 23:45:05.816151   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.834902   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.295066   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.315596   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.337679   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.796522   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.819378   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.835618   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.294539   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.316020   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.334800   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.795957   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.814807   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.898177   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.294969   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.315015   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.334677   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.794602   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.815785   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.835449   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.294347   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.315610   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.335913   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.794874   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.816279   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.836602   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.293962   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.316088   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.336950   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.794850   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.815333   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.834812   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.293947   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.314864   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.336551   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.793951   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.815074   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.835169   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.294157   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.316025   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.334999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.793537   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.816052   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.835220   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.294847   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.316349   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.530869   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.794199   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.815680   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.834887   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.294024   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.316467   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.335312   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.796494   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.818528   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.835913   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.315651   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.323527   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.358961   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.797719   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.816957   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.837099   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.295272   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.315412   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.396362   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.794170   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.822155   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.896707   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.293748   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.315802   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.335214   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.793616   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.816767   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.835654   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.295013   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.315495   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.335520   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.794139   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.815609   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.836865   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.294649   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.316145   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.334809   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.794462   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.815508   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.835295   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.295056   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.316527   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.338047   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.806853   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.815561   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.835205   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.294770   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.315980   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.334663   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.794777   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.816338   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.836230   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.294412   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.315702   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.335447   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.794628   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.815620   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.835361   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.293650   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.395283   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.395329   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.793877   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.815808   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.835409   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.293918   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.315860   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.335412   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.793839   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.815245   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.898032   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.294451   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.315704   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.335455   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.793836   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.816582   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.835669   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.628627   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.632813   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.634013   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.794979   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.896397   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.896487   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.293741   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.315766   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.335760   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.794529   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.815334   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.835041   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.293376   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.315301   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.335265   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.794052   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.814858   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.835666   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.294783   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.316351   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.335060   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.794176   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.815194   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.835926   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.298179   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.315086   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.335676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.795710   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.816332   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.834980   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.295094   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.315096   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.334846   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.794579   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.815733   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.836372   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.294789   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.316068   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.335169   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.794681   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.819177   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.835923   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.294724   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.315705   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.335150   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.794029   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.815072   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.834873   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.294181   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.315479   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.335970   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.794208   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.815257   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.835318   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.295096   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.317426   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:35.336908   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.794508   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.816087   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:35.835432   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.294021   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.315872   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:36.335684   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.794283   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.817651   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:36.837393   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.295633   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:37.324093   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:37.344997   86928 kapi.go:107] duration metric: took 1m3.013818607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:45:37.794111   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:37.815097   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:38.295498   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:38.316112   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.021212   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.021614   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.295872   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.316685   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.793930   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.816192   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:40.297315   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:40.316149   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:40.795086   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:40.817401   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:41.386094   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:41.386351   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:41.793999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:41.815657   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:42.294345   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:42.315625   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:42.795487   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:42.816258   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.295433   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:43.315734   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.795264   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:43.816629   86928 kapi.go:107] duration metric: took 1m11.505013998s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:45:44.294398   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:44.796425   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:45.294346   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:45.794877   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:46.295123   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:46.794849   86928 kapi.go:107] duration metric: took 1m11.004245607s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:45:46.796475   86928 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-327804 cluster.
	I1209 23:45:46.797718   86928 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:45:46.798940   86928 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 23:45:46.800215   86928 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 23:45:46.801498   86928 addons.go:510] duration metric: took 1m22.945635939s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns metrics-server amd-gpu-device-plugin storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1209 23:45:46.801533   86928 start.go:246] waiting for cluster config update ...
	I1209 23:45:46.801550   86928 start.go:255] writing updated cluster config ...
	I1209 23:45:46.801794   86928 ssh_runner.go:195] Run: rm -f paused
	I1209 23:45:46.851079   86928 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:45:46.852694   86928 out.go:177] * Done! kubectl is now configured to use "addons-327804" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.407981775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=495c66b8-de27-49e9-961e-cc24c6e8b1f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.408287592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8905753fb22ec879ab5a48ac61a2c15f0f50691150631f5272c38c8bfe8232c,PodSandboxId:4d133872fd1f62e5034823951b387e4991bb6bd226224d43fa501f9d0801c429,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733787942764685846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-92n4g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5901e18-c581-4028-804d-00d055489682,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:91aef11f98aaef9f3ca637f9abd6e1c9cbd5605d5c072b63cb2e8b0853109fb5,PodSandboxId:1200a38baed2e5b874e1d807f159a03adc98f4d6721e9ed93f3e446e6d37da0c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924876032750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h5nmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 244a35ac-ea1d-493b-bab2-daa20295e97f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8896c4718132bfbee03c98f8bfd5fbef163a3eccf4a24020a85d562f52703b5d,PodSandboxId:d4b2c73c457d4f846efbbb93787469f909e54531619086532b0553f59dbcc445,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924744294429,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrjq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3dcbd3a2-003d-4845-9f96-ce47cd659e31,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Ima
ge:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Me
tadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cc4394a4540faa405713f14a76705c38868b7adefc33c4d362ff13d288e84f,PodSandboxId:f39a16702c280a3d591a99c7fa5
c3d2db52eba70ccbc122d201dcbba575ff2c5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733787879806934622,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badf09c8-255f-4cbf-835d-fe1d2cf14471,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733787867553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=495c66b8-de27-49e9-961e-cc24c6e8b1f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.409168967Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},},}" file="otel-collector/interceptors.go:62" id=18ba067f-9455-44b6-954c-11840fb32489 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.409257885Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bc72w,Uid:9388cfab-df21-4794-9e5f-bfb3d41b1b70,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733788162497689040,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T23:49:22.179455430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=18ba067f-9455-44b6-954c-11840fb32489 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.409571841Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Verbose:false,}" file="otel-collector/interceptors.go:62" id=430746bf-140e-4907-8bff-9c58cc3249d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.409654743Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bc72w,Uid:9388cfab-df21-4794-9e5f-bfb3d41b1b70,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733788162497689040,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-09T23:49:22.179455430Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=430746bf-140e-4907-8bff-9c58cc3249d6 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.410047872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},},}" file="otel-collector/interceptors.go:62" id=56d846c9-a937-47d9-9462-8173031cf7ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.410090992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56d846c9-a937-47d9-9462-8173031cf7ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.410134123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=56d846c9-a937-47d9-9462-8173031cf7ff name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.420605860Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38af5f03-cb9a-49c1-825f-e19218605b1c name=/runtime.v1.RuntimeService/Version
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.420667230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38af5f03-cb9a-49c1-825f-e19218605b1c name=/runtime.v1.RuntimeService/Version
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.423061550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e14312d7-d084-4b49-b321-d3f971e90c64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.424643346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788163424605251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e14312d7-d084-4b49-b321-d3f971e90c64 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.425158672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73c16c93-4fda-453d-8f1b-703cda4185f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.425202645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73c16c93-4fda-453d-8f1b-703cda4185f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.425511523Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8905753fb22ec879ab5a48ac61a2c15f0f50691150631f5272c38c8bfe8232c,PodSandboxId:4d133872fd1f62e5034823951b387e4991bb6bd226224d43fa501f9d0801c429,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733787942764685846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-92n4g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5901e18-c581-4028-804d-00d055489682,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:91aef11f98aaef9f3ca637f9abd6e1c9cbd5605d5c072b63cb2e8b0853109fb5,PodSandboxId:1200a38baed2e5b874e1d807f159a03adc98f4d6721e9ed93f3e446e6d37da0c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924876032750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h5nmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 244a35ac-ea1d-493b-bab2-daa20295e97f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8896c4718132bfbee03c98f8bfd5fbef163a3eccf4a24020a85d562f52703b5d,PodSandboxId:d4b2c73c457d4f846efbbb93787469f909e54531619086532b0553f59dbcc445,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924744294429,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrjq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3dcbd3a2-003d-4845-9f96-ce47cd659e31,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Ima
ge:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Me
tadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cc4394a4540faa405713f14a76705c38868b7adefc33c4d362ff13d288e84f,PodSandboxId:f39a16702c280a3d591a99c7fa5
c3d2db52eba70ccbc122d201dcbba575ff2c5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733787879806934622,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badf09c8-255f-4cbf-835d-fe1d2cf14471,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733787867553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73c16c93-4fda-453d-8f1b-703cda4185f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.445549995Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.445796158Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.461507688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e743d6e5-6a59-4b21-b382-e801c5612428 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.461569729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e743d6e5-6a59-4b21-b382-e801c5612428 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.463002967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=278fac09-b60b-45a1-8200-d287fde5005e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.464158623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788163464135850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=278fac09-b60b-45a1-8200-d287fde5005e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.464722396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79638074-f47d-4cbd-8477-ee38910ee6cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.464833089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79638074-f47d-4cbd-8477-ee38910ee6cb name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:49:23 addons-327804 crio[666]: time="2024-12-09 23:49:23.465568250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8905753fb22ec879ab5a48ac61a2c15f0f50691150631f5272c38c8bfe8232c,PodSandboxId:4d133872fd1f62e5034823951b387e4991bb6bd226224d43fa501f9d0801c429,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733787942764685846,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-92n4g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e5901e18-c581-4028-804d-00d055489682,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:91aef11f98aaef9f3ca637f9abd6e1c9cbd5605d5c072b63cb2e8b0853109fb5,PodSandboxId:1200a38baed2e5b874e1d807f159a03adc98f4d6721e9ed93f3e446e6d37da0c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924876032750,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h5nmw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 244a35ac-ea1d-493b-bab2-daa20295e97f,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8896c4718132bfbee03c98f8bfd5fbef163a3eccf4a24020a85d562f52703b5d,PodSandboxId:d4b2c73c457d4f846efbbb93787469f909e54531619086532b0553f59dbcc445,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733787924744294429,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lrjq2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3dcbd3a2-003d-4845-9f96-ce47cd659e31,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Ima
ge:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Me
tadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6cc4394a4540faa405713f14a76705c38868b7adefc33c4d362ff13d288e84f,PodSandboxId:f39a16702c280a3d591a99c7fa5
c3d2db52eba70ccbc122d201dcbba575ff2c5,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733787879806934622,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: badf09c8-255f-4cbf-835d-fe1d2cf14471,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 3
0,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733787867553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.ku
bernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79638074-f47d-4cbd-8477-ee38910ee6cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	276e11c8f6782       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   1668e155efb26       nginx
	0abf32bbf73a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   b308b70e914fa       busybox
	e8905753fb22e       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   4d133872fd1f6       ingress-nginx-controller-5f85ff4588-92n4g
	91aef11f98aae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   1200a38baed2e       ingress-nginx-admission-patch-h5nmw
	8896c4718132b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   d4b2c73c457d4       ingress-nginx-admission-create-lrjq2
	47f74f33f56dd       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   4b615de9a52a5       metrics-server-84c5f94fbc-4d528
	0f092f3706f9b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   b0d0cf3c6c6d7       local-path-provisioner-86d989889c-zwvjn
	aa1e0ef4d0d0a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   464b08afec1f7       amd-gpu-device-plugin-pkmlz
	a6cc4394a4540       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   f39a16702c280       kube-ingress-dns-minikube
	477d0ec756e0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   6b6250eeaa11f       storage-provisioner
	e092c5623388a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   142b2695e8e20       coredns-7c65d6cfc9-r5t4g
	4c12f7a2107cd       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   1b8cea9b8d2c3       kube-proxy-2cbzc
	6063e15fb3524       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   1f3ad20dfd95f       kube-controller-manager-addons-327804
	1d77a9f595d88       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   d2dfcc30b6ae4       etcd-addons-327804
	b886b264255fd       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   5c71205104777       kube-apiserver-addons-327804
	273b5817c8ec5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   b177a2183d7b3       kube-scheduler-addons-327804
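	
	Note: the container listing above is CRI-O's view of the node at the time of failure. A similar snapshot can be reproduced on the minikube VM with crictl pointed at the CRI-O socket recorded in the node annotations below (a sketch only; it assumes crictl is available on the node, e.g. via minikube ssh, and uses a container ID from the table purely as an example):
	
	  # list all containers (running and exited) known to CRI-O
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	  # inspect one container by the ID shown in the first column above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 276e11c8f6782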
	
	
	==> coredns [e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293] <==
	[INFO] 10.244.0.8:39847 - 44265 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000126343s
	[INFO] 10.244.0.8:39847 - 50937 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084926s
	[INFO] 10.244.0.8:39847 - 60450 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000081529s
	[INFO] 10.244.0.8:39847 - 23815 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000063565s
	[INFO] 10.244.0.8:39847 - 49305 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000066675s
	[INFO] 10.244.0.8:39847 - 17786 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000101388s
	[INFO] 10.244.0.8:39847 - 27226 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000136202s
	[INFO] 10.244.0.8:46331 - 24415 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000077065s
	[INFO] 10.244.0.8:46331 - 24119 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000233415s
	[INFO] 10.244.0.8:56146 - 49800 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066016s
	[INFO] 10.244.0.8:56146 - 49599 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145589s
	[INFO] 10.244.0.8:45048 - 40252 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059267s
	[INFO] 10.244.0.8:45048 - 40490 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087793s
	[INFO] 10.244.0.8:35937 - 42568 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000048881s
	[INFO] 10.244.0.8:35937 - 42394 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000221866s
	[INFO] 10.244.0.23:37962 - 14905 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000396228s
	[INFO] 10.244.0.23:41467 - 34640 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163726s
	[INFO] 10.244.0.23:56127 - 51458 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128993s
	[INFO] 10.244.0.23:45343 - 22251 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000074419s
	[INFO] 10.244.0.23:33233 - 21841 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115749s
	[INFO] 10.244.0.23:58450 - 6353 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160672s
	[INFO] 10.244.0.23:58052 - 18001 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001135563s
	[INFO] 10.244.0.23:35618 - 45226 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001269652s
	[INFO] 10.244.0.26:49096 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000414189s
	[INFO] 10.244.0.26:45460 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175412s
	
	
	==> describe nodes <==
	Name:               addons-327804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-327804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=addons-327804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_44_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-327804
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-327804
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:49:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:47:23 +0000   Mon, 09 Dec 2024 23:44:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    addons-327804
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a88e00e239984a4881f4ee141420868c
	  System UUID:                a88e00e2-3998-4a48-81f4-ee141420868c
	  Boot ID:                    5ecd71d7-fc05-46ad-bf4f-2a572fc8b0b9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  default                     hello-world-app-55bf9c44b4-bc72w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-92n4g    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-pkmlz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 coredns-7c65d6cfc9-r5t4g                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m59s
	  kube-system                 etcd-addons-327804                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m4s
	  kube-system                 kube-apiserver-addons-327804                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-addons-327804        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-2cbzc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-addons-327804                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-84c5f94fbc-4d528              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  local-path-storage          local-path-provisioner-86d989889c-zwvjn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m57s  kube-proxy       
	  Normal  Starting                 5m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m4s   kubelet          Node addons-327804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m4s   kubelet          Node addons-327804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m4s   kubelet          Node addons-327804 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m3s   kubelet          Node addons-327804 status is now: NodeReady
	  Normal  RegisteredNode           5m     node-controller  Node addons-327804 event: Registered Node addons-327804 in Controller
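	
	Note: the node description above is the usual `kubectl describe node` output for the single control-plane node, and the Allocated resources totals are the sum of the per-pod requests listed under Non-terminated Pods. A minimal way to re-check this against the live cluster (a sketch; it assumes the kubeconfig context is named after the minikube profile, which is not shown in this log excerpt):
	
	  kubectl --context addons-327804 describe node addons-327804
	  kubectl --context addons-327804 get pods -A -o wide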
	
	
	==> dmesg <==
	[  +0.080311] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.231975] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.147892] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.031426] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.143707] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.455973] kauditd_printk_skb: 64 callbacks suppressed
	[ +11.408867] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 9 23:45] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.096501] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.890482] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.322026] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.387412] kauditd_printk_skb: 42 callbacks suppressed
	[  +8.508099] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.309521] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.461136] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 23:46] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.602918] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.566614] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.246945] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.889739] kauditd_printk_skb: 32 callbacks suppressed
	[Dec 9 23:47] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.050351] kauditd_printk_skb: 51 callbacks suppressed
	[ +10.776318] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.856995] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 9 23:49] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1] <==
	{"level":"info","ts":"2024-12-09T23:45:39.000134Z","caller":"traceutil/trace.go:171","msg":"trace[1563361563] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"319.644022ms","start":"2024-12-09T23:45:38.680474Z","end":"2024-12-09T23:45:39.000118Z","steps":["trace[1563361563] 'process raft request'  (duration: 319.554765ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:39.000342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:45:38.680460Z","time spent":"319.733217ms","remote":"127.0.0.1:51388","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1064 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-12-09T23:45:39.000659Z","caller":"traceutil/trace.go:171","msg":"trace[52526850] linearizableReadLoop","detail":"{readStateIndex:1107; appliedIndex:1107; }","duration":"220.218029ms","start":"2024-12-09T23:45:38.780432Z","end":"2024-12-09T23:45:39.000650Z","steps":["trace[52526850] 'read index received'  (duration: 220.215262ms)","trace[52526850] 'applied index is now lower than readState.Index'  (duration: 2.184µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:45:39.000778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.296435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:39.000801Z","caller":"traceutil/trace.go:171","msg":"trace[312828656] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1074; }","duration":"220.367395ms","start":"2024-12-09T23:45:38.780428Z","end":"2024-12-09T23:45:39.000795Z","steps":["trace[312828656] 'agreement among raft nodes before linearized reading'  (duration: 220.264249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:39.001100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.42476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:39.001132Z","caller":"traceutil/trace.go:171","msg":"trace[1960441612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1075; }","duration":"199.462473ms","start":"2024-12-09T23:45:38.801663Z","end":"2024-12-09T23:45:39.001125Z","steps":["trace[1960441612] 'agreement among raft nodes before linearized reading'  (duration: 199.391595ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:41.364219Z","caller":"traceutil/trace.go:171","msg":"trace[191935509] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"356.874214ms","start":"2024-12-09T23:45:41.007331Z","end":"2024-12-09T23:45:41.364205Z","steps":["trace[191935509] 'process raft request'  (duration: 356.740068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:41.364396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:45:41.007314Z","time spent":"357.017477ms","remote":"127.0.0.1:51388","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1074 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-12-09T23:45:41.364860Z","caller":"traceutil/trace.go:171","msg":"trace[708113448] linearizableReadLoop","detail":"{readStateIndex:1116; appliedIndex:1116; }","duration":"271.885884ms","start":"2024-12-09T23:45:41.092964Z","end":"2024-12-09T23:45:41.364850Z","steps":["trace[708113448] 'read index received'  (duration: 271.882942ms)","trace[708113448] 'applied index is now lower than readState.Index'  (duration: 2.484µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:45:41.364996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.021354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:41.365052Z","caller":"traceutil/trace.go:171","msg":"trace[940995054] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1083; }","duration":"272.085806ms","start":"2024-12-09T23:45:41.092960Z","end":"2024-12-09T23:45:41.365046Z","steps":["trace[940995054] 'agreement among raft nodes before linearized reading'  (duration: 272.004677ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:41.368074Z","caller":"traceutil/trace.go:171","msg":"trace[1489611023] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"257.692898ms","start":"2024-12-09T23:45:41.110370Z","end":"2024-12-09T23:45:41.368063Z","steps":["trace[1489611023] 'process raft request'  (duration: 257.536026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:41.368330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.11973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-09T23:45:41.368378Z","caller":"traceutil/trace.go:171","msg":"trace[1600388161] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1084; }","duration":"232.166903ms","start":"2024-12-09T23:45:41.136198Z","end":"2024-12-09T23:45:41.368365Z","steps":["trace[1600388161] 'agreement among raft nodes before linearized reading'  (duration: 232.109152ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:17.705801Z","caller":"traceutil/trace.go:171","msg":"trace[1992682220] linearizableReadLoop","detail":"{readStateIndex:1287; appliedIndex:1286; }","duration":"109.533231ms","start":"2024-12-09T23:46:17.596199Z","end":"2024-12-09T23:46:17.705732Z","steps":["trace[1992682220] 'read index received'  (duration: 109.316689ms)","trace[1992682220] 'applied index is now lower than readState.Index'  (duration: 215.822µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:46:17.705976Z","caller":"traceutil/trace.go:171","msg":"trace[242282482] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"125.960265ms","start":"2024-12-09T23:46:17.579998Z","end":"2024-12-09T23:46:17.705959Z","steps":["trace[242282482] 'process raft request'  (duration: 125.560564ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:17.706078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.885935ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:17.706139Z","caller":"traceutil/trace.go:171","msg":"trace[1713520814] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1246; }","duration":"109.956672ms","start":"2024-12-09T23:46:17.596173Z","end":"2024-12-09T23:46:17.706130Z","steps":["trace[1713520814] 'agreement among raft nodes before linearized reading'  (duration: 109.817238ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:32.679940Z","caller":"traceutil/trace.go:171","msg":"trace[1539081306] transaction","detail":"{read_only:false; response_revision:1307; number_of_response:1; }","duration":"360.10058ms","start":"2024-12-09T23:46:32.319823Z","end":"2024-12-09T23:46:32.679924Z","steps":["trace[1539081306] 'process raft request'  (duration: 359.743223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:32.680244Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:46:32.319803Z","time spent":"360.294711ms","remote":"127.0.0.1:51492","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-327804\" mod_revision:1264 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-327804\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-327804\" > >"}
	{"level":"warn","ts":"2024-12-09T23:46:48.916144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.513091ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:48.916268Z","caller":"traceutil/trace.go:171","msg":"trace[1542012280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1409; }","duration":"320.655307ms","start":"2024-12-09T23:46:48.595595Z","end":"2024-12-09T23:46:48.916250Z","steps":["trace[1542012280] 'range keys from in-memory index tree'  (duration: 320.495445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:48.916269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.83623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:48.916313Z","caller":"traceutil/trace.go:171","msg":"trace[312283620] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1409; }","duration":"262.932259ms","start":"2024-12-09T23:46:48.653372Z","end":"2024-12-09T23:46:48.916304Z","steps":["trace[312283620] 'range keys from in-memory index tree'  (duration: 262.790265ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:49:23 up 5 min,  0 users,  load average: 0.45, 0.81, 0.45
	Linux addons-327804 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf] <==
	 > logger="UnhandledError"
	E1209 23:46:20.191586       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	E1209 23:46:20.193224       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	E1209 23:46:20.199280       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	I1209 23:46:20.279030       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1209 23:46:27.688073       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.240.248"}
	I1209 23:46:58.300619       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 23:47:01.981025       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:47:02.154044       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.228.194"}
	I1209 23:47:08.029300       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:47:09.164023       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:47:25.690074       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.690128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.719938       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.720037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.766293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.766388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.834865       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.835018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.878524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.878571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:47:26.835478       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:47:26.878469       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1209 23:47:26.898842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1209 23:49:22.366967       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.162.113"}
	
	
	==> kube-controller-manager [6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272] <==
	E1209 23:47:45.569711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:47:53.444503       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1209 23:47:53.444552       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:47:54.079937       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1209 23:47:54.079977       1 shared_informer.go:320] Caches are synced for garbage collector
	W1209 23:48:04.649237       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:04.649351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:08.237911       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:08.237945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:09.988460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:09.988571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:10.916269       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:10.916325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:46.330322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:46.330418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:47.680469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:47.680571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:50.766343       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:50.766575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:48:50.916297       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:48:50.916397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1209 23:49:22.189152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="42.796961ms"
	I1209 23:49:22.204684       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.462333ms"
	I1209 23:49:22.204867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="134.635µs"
	I1209 23:49:22.206069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.097µs"
	
	
	==> kube-proxy [4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:44:25.964014       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:44:25.982301       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	E1209 23:44:25.982372       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:44:26.094533       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:44:26.094578       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:44:26.094610       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:44:26.101262       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:44:26.102877       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:44:26.102932       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:44:26.107311       1 config.go:199] "Starting service config controller"
	I1209 23:44:26.107333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:44:26.107350       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:44:26.107354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:44:26.107726       1 config.go:328] "Starting node config controller"
	I1209 23:44:26.107736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:44:26.209684       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:44:26.209721       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:44:26.209788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb] <==
	W1209 23:44:16.386374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:16.387325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:16.387405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:16.387423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:44:16.387507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.208521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:44:17.208572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.350967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.351031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.450145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.450192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.458854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:17.458901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.458977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:17.459005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.500395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:44:17.500453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.605711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 23:44:17.605791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.771064       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:44:17.771112       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1209 23:44:19.977675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:49:19 addons-327804 kubelet[1204]: E1209 23:49:19.152985    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788159152656445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:49:19 addons-327804 kubelet[1204]: E1209 23:49:19.153046    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788159152656445,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.179638    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d20aef45-da7a-435c-9074-2b9dc1cd24db" containerName="csi-attacher"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180229    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d219c1ab-52ca-4d79-8e0c-1e31958bfda8" containerName="task-pv-container"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180294    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="node-driver-registrar"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180359    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="hostpath"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180416    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="liveness-probe"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180463    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2c529bb9-d4dd-41aa-ae16-5fd1853d334c" containerName="volume-snapshot-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180526    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-snapshotter"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180569    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b3d1329-f736-4c18-8da6-a2e60b272146" containerName="volume-snapshot-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180602    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-external-health-monitor-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180636    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-provisioner"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: E1209 23:49:22.180668    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="23152550-a282-425c-afac-778089918479" containerName="csi-resizer"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180812    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-snapshotter"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180860    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-external-health-monitor-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180891    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="node-driver-registrar"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180922    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="hostpath"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180952    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3d1329-f736-4c18-8da6-a2e60b272146" containerName="volume-snapshot-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.180983    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c529bb9-d4dd-41aa-ae16-5fd1853d334c" containerName="volume-snapshot-controller"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.181014    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="d219c1ab-52ca-4d79-8e0c-1e31958bfda8" containerName="task-pv-container"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.181044    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="csi-provisioner"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.181074    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="206125d5-90c8-4598-b3aa-f9156187f289" containerName="liveness-probe"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.181106    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="23152550-a282-425c-afac-778089918479" containerName="csi-resizer"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.181136    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="d20aef45-da7a-435c-9074-2b9dc1cd24db" containerName="csi-attacher"
	Dec 09 23:49:22 addons-327804 kubelet[1204]: I1209 23:49:22.273120    1204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfllz\" (UniqueName: \"kubernetes.io/projected/9388cfab-df21-4794-9e5f-bfb3d41b1b70-kube-api-access-gfllz\") pod \"hello-world-app-55bf9c44b4-bc72w\" (UID: \"9388cfab-df21-4794-9e5f-bfb3d41b1b70\") " pod="default/hello-world-app-55bf9c44b4-bc72w"
	
	
	==> storage-provisioner [477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3] <==
	I1209 23:44:30.417277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:44:30.446309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:44:30.446376       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:44:30.467001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:44:30.467126       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e!
	I1209 23:44:30.468600       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71e33a61-5d8a-4fa0-8994-9afd2fadca64", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e became leader
	I1209 23:44:30.567849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-327804 -n addons-327804
helpers_test.go:261: (dbg) Run:  kubectl --context addons-327804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-bc72w ingress-nginx-admission-create-lrjq2 ingress-nginx-admission-patch-h5nmw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-327804 describe pod hello-world-app-55bf9c44b4-bc72w ingress-nginx-admission-create-lrjq2 ingress-nginx-admission-patch-h5nmw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-327804 describe pod hello-world-app-55bf9c44b4-bc72w ingress-nginx-admission-create-lrjq2 ingress-nginx-admission-patch-h5nmw: exit status 1 (68.865979ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-bc72w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-327804/192.168.39.22
	Start Time:       Mon, 09 Dec 2024 23:49:22 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfllz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gfllz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-bc72w to addons-327804
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lrjq2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h5nmw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-327804 describe pod hello-world-app-55bf9c44b4-bc72w ingress-nginx-admission-create-lrjq2 ingress-nginx-admission-patch-h5nmw: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 addons disable ingress --alsologtostderr -v=1: (7.688178065s)
--- FAIL: TestAddons/parallel/Ingress (151.52s)

                                                
                                    
TestAddons/parallel/MetricsServer (364.54s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.426124ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4d528" [8de05551-49ab-4933-852a-16b88842a109] Running
I1209 23:46:26.957448   86296 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 23:46:26.957482   86296 kapi.go:107] duration metric: took 11.49808ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003557856s
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (66.779314ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m7.018395265s

                                                
                                                
** /stderr **
I1209 23:46:33.020577   86296 retry.go:31] will retry after 3.810034302s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (67.182566ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m10.896414238s

                                                
                                                
** /stderr **
I1209 23:46:36.898419   86296 retry.go:31] will retry after 6.466207339s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (78.441867ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m17.441414957s

                                                
                                                
** /stderr **
I1209 23:46:43.443566   86296 retry.go:31] will retry after 3.597222107s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (66.687443ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m21.106286108s

                                                
                                                
** /stderr **
I1209 23:46:47.108717   86296 retry.go:31] will retry after 14.911846133s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (69.632699ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m36.088509583s

                                                
                                                
** /stderr **
I1209 23:47:02.090745   86296 retry.go:31] will retry after 17.223503562s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (61.012635ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 2m53.373555645s

                                                
                                                
** /stderr **
I1209 23:47:19.375709   86296 retry.go:31] will retry after 30.783640374s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (66.687176ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 3m24.224528646s

                                                
                                                
** /stderr **
I1209 23:47:50.226627   86296 retry.go:31] will retry after 43.179685984s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (62.501336ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 4m7.472178231s

                                                
                                                
** /stderr **
I1209 23:48:33.474262   86296 retry.go:31] will retry after 1m5.855901592s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (64.821166ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 5m13.394155424s

                                                
                                                
** /stderr **
I1209 23:49:39.396671   86296 retry.go:31] will retry after 1m0.756307745s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (60.954759ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 6m14.214381674s

                                                
                                                
** /stderr **
I1209 23:50:40.216597   86296 retry.go:31] will retry after 55.38143436s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (63.723826ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 7m9.66009414s

                                                
                                                
** /stderr **
I1209 23:51:35.662167   86296 retry.go:31] will retry after 53.386805376s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-327804 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-327804 top pods -n kube-system: exit status 1 (62.569235ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-pkmlz, age: 8m3.110778228s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-327804 -n addons-327804
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 logs -n 25: (1.071378346s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-279229                                                                     | download-only-279229 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-539681                                                                     | download-only-539681 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-419481 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | binary-mirror-419481                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41707                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-419481                                                                     | binary-mirror-419481 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| addons  | disable dashboard -p                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-327804                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | addons-327804                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-327804 --wait=true                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:45 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:45 UTC | 09 Dec 24 23:45 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:45 UTC | 09 Dec 24 23:46 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | -p addons-327804                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-327804 ip                                                                            | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:46 UTC | 09 Dec 24 23:46 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-327804 ssh cat                                                                       | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | /opt/local-path-provisioner/pvc-d933e89a-c1b5-434b-bf3c-35e985eb04c2_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-327804 ssh curl -s                                                                   | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-327804 addons                                                                        | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:47 UTC | 09 Dec 24 23:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-327804 ip                                                                            | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-327804 addons disable                                                                | addons-327804        | jenkins | v1.34.0 | 09 Dec 24 23:49 UTC | 09 Dec 24 23:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:40
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:40.797815   86928 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:40.797941   86928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:40.797951   86928 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:40.797955   86928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:40.798164   86928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1209 23:43:40.798829   86928 out.go:352] Setting JSON to false
	I1209 23:43:40.799678   86928 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5172,"bootTime":1733782649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:40.799766   86928 start.go:139] virtualization: kvm guest
	I1209 23:43:40.801628   86928 out.go:177] * [addons-327804] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:40.803152   86928 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 23:43:40.803155   86928 notify.go:220] Checking for updates...
	I1209 23:43:40.804421   86928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:40.805674   86928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:43:40.806748   86928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:40.807838   86928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1209 23:43:40.808861   86928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 23:43:40.810037   86928 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:40.840708   86928 out.go:177] * Using the kvm2 driver based on user configuration
	I1209 23:43:40.841813   86928 start.go:297] selected driver: kvm2
	I1209 23:43:40.841833   86928 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:43:40.841851   86928 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 23:43:40.842524   86928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:40.842643   86928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:43:40.856864   86928 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:43:40.856908   86928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:40.857223   86928 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:43:40.857269   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:43:40.857327   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:43:40.857340   86928 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:40.857398   86928 start.go:340] cluster config:
	{Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:40.857549   86928 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:40.859092   86928 out.go:177] * Starting "addons-327804" primary control-plane node in "addons-327804" cluster
	I1209 23:43:40.860222   86928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:43:40.860249   86928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:40.860268   86928 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:40.860354   86928 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1209 23:43:40.860368   86928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
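Note: the two cache lines above amount to a local existence check; because the CRI-O preload tarball for v1.31.2 is already under the .minikube cache, no download is attempted. A minimal, hypothetical sketch of that decision (path taken from the log; the real preloader likely also verifies integrity, which is omitted here):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath is the cached tarball location reported in the log above.
    const preloadPath = "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"

    func main() {
        // Reuse the cached preload if present; otherwise it would have to be
        // downloaded before the VM is provisioned.
        if info, err := os.Stat(preloadPath); err == nil && info.Size() > 0 {
            fmt.Printf("found %s (%d bytes), skipping download\n", filepath.Base(preloadPath), info.Size())
            return
        }
        fmt.Println("preload not cached, download required")
    }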
	I1209 23:43:40.860769   86928 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json ...
	I1209 23:43:40.860796   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json: {Name:mk75ac48819931541f6e8d216a32d3d7747b635e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:43:40.860941   86928 start.go:360] acquireMachinesLock for addons-327804: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 23:43:40.861012   86928 start.go:364] duration metric: took 55.128µs to acquireMachinesLock for "addons-327804"
	I1209 23:43:40.861038   86928 start.go:93] Provisioning new machine with config: &{Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:43:40.861090   86928 start.go:125] createHost starting for "" (driver="kvm2")
	I1209 23:43:40.862489   86928 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1209 23:43:40.862647   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:43:40.862687   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:43:40.875854   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1209 23:43:40.876367   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:43:40.877017   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:43:40.877043   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:43:40.877383   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:43:40.877557   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:43:40.877674   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:43:40.877787   86928 start.go:159] libmachine.API.Create for "addons-327804" (driver="kvm2")
	I1209 23:43:40.877822   86928 client.go:168] LocalClient.Create starting
	I1209 23:43:40.877859   86928 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1209 23:43:40.954333   86928 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1209 23:43:41.072464   86928 main.go:141] libmachine: Running pre-create checks...
	I1209 23:43:41.072488   86928 main.go:141] libmachine: (addons-327804) Calling .PreCreateCheck
	I1209 23:43:41.072961   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:43:41.073400   86928 main.go:141] libmachine: Creating machine...
	I1209 23:43:41.073412   86928 main.go:141] libmachine: (addons-327804) Calling .Create
	I1209 23:43:41.073541   86928 main.go:141] libmachine: (addons-327804) Creating KVM machine...
	I1209 23:43:41.074849   86928 main.go:141] libmachine: (addons-327804) DBG | found existing default KVM network
	I1209 23:43:41.075569   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.075394   86950 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1209 23:43:41.075593   86928 main.go:141] libmachine: (addons-327804) DBG | created network xml: 
	I1209 23:43:41.075603   86928 main.go:141] libmachine: (addons-327804) DBG | <network>
	I1209 23:43:41.075609   86928 main.go:141] libmachine: (addons-327804) DBG |   <name>mk-addons-327804</name>
	I1209 23:43:41.075615   86928 main.go:141] libmachine: (addons-327804) DBG |   <dns enable='no'/>
	I1209 23:43:41.075619   86928 main.go:141] libmachine: (addons-327804) DBG |   
	I1209 23:43:41.075625   86928 main.go:141] libmachine: (addons-327804) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1209 23:43:41.075635   86928 main.go:141] libmachine: (addons-327804) DBG |     <dhcp>
	I1209 23:43:41.075641   86928 main.go:141] libmachine: (addons-327804) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1209 23:43:41.075646   86928 main.go:141] libmachine: (addons-327804) DBG |     </dhcp>
	I1209 23:43:41.075651   86928 main.go:141] libmachine: (addons-327804) DBG |   </ip>
	I1209 23:43:41.075658   86928 main.go:141] libmachine: (addons-327804) DBG |   
	I1209 23:43:41.075663   86928 main.go:141] libmachine: (addons-327804) DBG | </network>
	I1209 23:43:41.075669   86928 main.go:141] libmachine: (addons-327804) DBG | 
	I1209 23:43:41.080831   86928 main.go:141] libmachine: (addons-327804) DBG | trying to create private KVM network mk-addons-327804 192.168.39.0/24...
	I1209 23:43:41.144777   86928 main.go:141] libmachine: (addons-327804) DBG | private KVM network mk-addons-327804 192.168.39.0/24 created
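Note: the kvm2 driver creates the private network mk-addons-327804 through the libvirt API from the XML it just printed. Outside of minikube, a roughly equivalent network can be created by hand; the sketch below is hypothetical (it assumes virsh is installed and qemu:///system is reachable) and simply writes the same XML to a file, then defines and starts it:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // netXML mirrors the network definition printed in the log above.
    const netXML = `<network>
      <name>mk-addons-327804</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-addons-net-*.xml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(netXML); err != nil {
            log.Fatal(err)
        }
        f.Close()

        // Define and start the network; the kvm2 driver achieves the same
        // effect through the libvirt API rather than shelling out.
        for _, args := range [][]string{
            {"-c", "qemu:///system", "net-define", f.Name()},
            {"-c", "qemu:///system", "net-start", "mk-addons-327804"},
        } {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
            }
        }
    }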
	I1209 23:43:41.144832   86928 main.go:141] libmachine: (addons-327804) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 ...
	I1209 23:43:41.144856   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.144754   86950 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:41.144875   86928 main.go:141] libmachine: (addons-327804) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:43:41.144981   86928 main.go:141] libmachine: (addons-327804) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1209 23:43:41.414966   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.414844   86950 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa...
	I1209 23:43:41.750891   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.750756   86950 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/addons-327804.rawdisk...
	I1209 23:43:41.750921   86928 main.go:141] libmachine: (addons-327804) DBG | Writing magic tar header
	I1209 23:43:41.750929   86928 main.go:141] libmachine: (addons-327804) DBG | Writing SSH key tar header
	I1209 23:43:41.751004   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:41.750937   86950 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 ...
	I1209 23:43:41.751065   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804
	I1209 23:43:41.751091   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804 (perms=drwx------)
	I1209 23:43:41.751112   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1209 23:43:41.751124   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1209 23:43:41.751137   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1209 23:43:41.751143   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1209 23:43:41.751170   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:41.751181   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1209 23:43:41.751191   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1209 23:43:41.751203   86928 main.go:141] libmachine: (addons-327804) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1209 23:43:41.751222   86928 main.go:141] libmachine: (addons-327804) Creating domain...
	I1209 23:43:41.751234   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1209 23:43:41.751244   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home/jenkins
	I1209 23:43:41.751256   86928 main.go:141] libmachine: (addons-327804) DBG | Checking permissions on dir: /home
	I1209 23:43:41.751275   86928 main.go:141] libmachine: (addons-327804) DBG | Skipping /home - not owner
	I1209 23:43:41.752397   86928 main.go:141] libmachine: (addons-327804) define libvirt domain using xml: 
	I1209 23:43:41.752427   86928 main.go:141] libmachine: (addons-327804) <domain type='kvm'>
	I1209 23:43:41.752435   86928 main.go:141] libmachine: (addons-327804)   <name>addons-327804</name>
	I1209 23:43:41.752440   86928 main.go:141] libmachine: (addons-327804)   <memory unit='MiB'>4000</memory>
	I1209 23:43:41.752445   86928 main.go:141] libmachine: (addons-327804)   <vcpu>2</vcpu>
	I1209 23:43:41.752451   86928 main.go:141] libmachine: (addons-327804)   <features>
	I1209 23:43:41.752458   86928 main.go:141] libmachine: (addons-327804)     <acpi/>
	I1209 23:43:41.752468   86928 main.go:141] libmachine: (addons-327804)     <apic/>
	I1209 23:43:41.752476   86928 main.go:141] libmachine: (addons-327804)     <pae/>
	I1209 23:43:41.752482   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.752493   86928 main.go:141] libmachine: (addons-327804)   </features>
	I1209 23:43:41.752503   86928 main.go:141] libmachine: (addons-327804)   <cpu mode='host-passthrough'>
	I1209 23:43:41.752533   86928 main.go:141] libmachine: (addons-327804)   
	I1209 23:43:41.752569   86928 main.go:141] libmachine: (addons-327804)   </cpu>
	I1209 23:43:41.752582   86928 main.go:141] libmachine: (addons-327804)   <os>
	I1209 23:43:41.752592   86928 main.go:141] libmachine: (addons-327804)     <type>hvm</type>
	I1209 23:43:41.752601   86928 main.go:141] libmachine: (addons-327804)     <boot dev='cdrom'/>
	I1209 23:43:41.752610   86928 main.go:141] libmachine: (addons-327804)     <boot dev='hd'/>
	I1209 23:43:41.752638   86928 main.go:141] libmachine: (addons-327804)     <bootmenu enable='no'/>
	I1209 23:43:41.752655   86928 main.go:141] libmachine: (addons-327804)   </os>
	I1209 23:43:41.752669   86928 main.go:141] libmachine: (addons-327804)   <devices>
	I1209 23:43:41.752684   86928 main.go:141] libmachine: (addons-327804)     <disk type='file' device='cdrom'>
	I1209 23:43:41.752699   86928 main.go:141] libmachine: (addons-327804)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/boot2docker.iso'/>
	I1209 23:43:41.752709   86928 main.go:141] libmachine: (addons-327804)       <target dev='hdc' bus='scsi'/>
	I1209 23:43:41.752724   86928 main.go:141] libmachine: (addons-327804)       <readonly/>
	I1209 23:43:41.752735   86928 main.go:141] libmachine: (addons-327804)     </disk>
	I1209 23:43:41.752748   86928 main.go:141] libmachine: (addons-327804)     <disk type='file' device='disk'>
	I1209 23:43:41.752764   86928 main.go:141] libmachine: (addons-327804)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1209 23:43:41.752784   86928 main.go:141] libmachine: (addons-327804)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/addons-327804.rawdisk'/>
	I1209 23:43:41.752794   86928 main.go:141] libmachine: (addons-327804)       <target dev='hda' bus='virtio'/>
	I1209 23:43:41.752800   86928 main.go:141] libmachine: (addons-327804)     </disk>
	I1209 23:43:41.752809   86928 main.go:141] libmachine: (addons-327804)     <interface type='network'>
	I1209 23:43:41.752819   86928 main.go:141] libmachine: (addons-327804)       <source network='mk-addons-327804'/>
	I1209 23:43:41.752833   86928 main.go:141] libmachine: (addons-327804)       <model type='virtio'/>
	I1209 23:43:41.752844   86928 main.go:141] libmachine: (addons-327804)     </interface>
	I1209 23:43:41.752855   86928 main.go:141] libmachine: (addons-327804)     <interface type='network'>
	I1209 23:43:41.752868   86928 main.go:141] libmachine: (addons-327804)       <source network='default'/>
	I1209 23:43:41.752875   86928 main.go:141] libmachine: (addons-327804)       <model type='virtio'/>
	I1209 23:43:41.752892   86928 main.go:141] libmachine: (addons-327804)     </interface>
	I1209 23:43:41.752907   86928 main.go:141] libmachine: (addons-327804)     <serial type='pty'>
	I1209 23:43:41.752919   86928 main.go:141] libmachine: (addons-327804)       <target port='0'/>
	I1209 23:43:41.752928   86928 main.go:141] libmachine: (addons-327804)     </serial>
	I1209 23:43:41.752936   86928 main.go:141] libmachine: (addons-327804)     <console type='pty'>
	I1209 23:43:41.752949   86928 main.go:141] libmachine: (addons-327804)       <target type='serial' port='0'/>
	I1209 23:43:41.752960   86928 main.go:141] libmachine: (addons-327804)     </console>
	I1209 23:43:41.752972   86928 main.go:141] libmachine: (addons-327804)     <rng model='virtio'>
	I1209 23:43:41.752979   86928 main.go:141] libmachine: (addons-327804)       <backend model='random'>/dev/random</backend>
	I1209 23:43:41.752987   86928 main.go:141] libmachine: (addons-327804)     </rng>
	I1209 23:43:41.752995   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.753005   86928 main.go:141] libmachine: (addons-327804)     
	I1209 23:43:41.753013   86928 main.go:141] libmachine: (addons-327804)   </devices>
	I1209 23:43:41.753022   86928 main.go:141] libmachine: (addons-327804) </domain>
	I1209 23:43:41.753031   86928 main.go:141] libmachine: (addons-327804) 
	I1209 23:43:41.756834   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:99:f1:eb in network default
	I1209 23:43:41.757484   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:41.757499   86928 main.go:141] libmachine: (addons-327804) Ensuring networks are active...
	I1209 23:43:41.758125   86928 main.go:141] libmachine: (addons-327804) Ensuring network default is active
	I1209 23:43:41.758480   86928 main.go:141] libmachine: (addons-327804) Ensuring network mk-addons-327804 is active
	I1209 23:43:41.759017   86928 main.go:141] libmachine: (addons-327804) Getting domain xml...
	I1209 23:43:41.759722   86928 main.go:141] libmachine: (addons-327804) Creating domain...
	I1209 23:43:42.926326   86928 main.go:141] libmachine: (addons-327804) Waiting to get IP...
	I1209 23:43:42.927176   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:42.927507   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:42.927535   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:42.927485   86950 retry.go:31] will retry after 270.923204ms: waiting for machine to come up
	I1209 23:43:43.200163   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.200573   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.200598   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.200560   86950 retry.go:31] will retry after 363.249732ms: waiting for machine to come up
	I1209 23:43:43.565030   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.565407   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.565432   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.565376   86950 retry.go:31] will retry after 406.688542ms: waiting for machine to come up
	I1209 23:43:43.973817   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:43.974220   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:43.974250   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:43.974166   86950 retry.go:31] will retry after 504.435555ms: waiting for machine to come up
	I1209 23:43:44.479835   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:44.480175   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:44.480204   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:44.480127   86950 retry.go:31] will retry after 630.106447ms: waiting for machine to come up
	I1209 23:43:45.111920   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:45.112378   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:45.112403   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:45.112329   86950 retry.go:31] will retry after 841.474009ms: waiting for machine to come up
	I1209 23:43:45.954929   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:45.955348   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:45.955377   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:45.955312   86950 retry.go:31] will retry after 945.238556ms: waiting for machine to come up
	I1209 23:43:46.902593   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:46.902917   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:46.902946   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:46.902874   86950 retry.go:31] will retry after 1.369231385s: waiting for machine to come up
	I1209 23:43:48.273670   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:48.274128   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:48.274160   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:48.274075   86950 retry.go:31] will retry after 1.549923986s: waiting for machine to come up
	I1209 23:43:49.825784   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:49.826227   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:49.826250   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:49.826161   86950 retry.go:31] will retry after 2.038935598s: waiting for machine to come up
	I1209 23:43:51.866265   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:51.866767   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:51.866795   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:51.866712   86950 retry.go:31] will retry after 2.246478528s: waiting for machine to come up
	I1209 23:43:54.116049   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:54.116426   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:54.116449   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:54.116371   86950 retry.go:31] will retry after 3.260771273s: waiting for machine to come up
	I1209 23:43:57.379356   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:43:57.379779   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find current IP address of domain addons-327804 in network mk-addons-327804
	I1209 23:43:57.379802   86928 main.go:141] libmachine: (addons-327804) DBG | I1209 23:43:57.379739   86950 retry.go:31] will retry after 4.229679028s: waiting for machine to come up
	I1209 23:44:01.610807   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.611231   86928 main.go:141] libmachine: (addons-327804) Found IP for machine: 192.168.39.22
	I1209 23:44:01.611267   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has current primary IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.611280   86928 main.go:141] libmachine: (addons-327804) Reserving static IP address...
	I1209 23:44:01.611660   86928 main.go:141] libmachine: (addons-327804) DBG | unable to find host DHCP lease matching {name: "addons-327804", mac: "52:54:00:6e:5b:83", ip: "192.168.39.22"} in network mk-addons-327804
	I1209 23:44:01.681860   86928 main.go:141] libmachine: (addons-327804) Reserved static IP address: 192.168.39.22
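Note: the "will retry after ..." lines above come from a jittered, growing backoff while the driver waits for the domain's MAC address (52:54:00:6e:5b:83) to show up in the network's DHCP leases. A simplified sketch of that wait loop follows; it is not the actual retry.go implementation, and lookupIP is a hypothetical stand-in for the lease query:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the DHCP leases of
    // mk-addons-327804 (e.g. via `virsh net-dhcp-leases`) and matching the
    // domain's MAC address 52:54:00:6e:5b:83.
    func lookupIP() (string, bool) {
        return "", false
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                fmt.Println("found IP:", ip)
                return
            }
            // Jittered, growing backoff, similar in spirit to the 270ms .. 4.2s
            // waits shown in the log above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
            if delay > 4*time.Second {
                delay = 4 * time.Second
            }
        }
        fmt.Println("timed out waiting for an IP address")
    }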
	I1209 23:44:01.681893   86928 main.go:141] libmachine: (addons-327804) Waiting for SSH to be available...
	I1209 23:44:01.681902   86928 main.go:141] libmachine: (addons-327804) DBG | Getting to WaitForSSH function...
	I1209 23:44:01.684772   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.685211   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.685243   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.685412   86928 main.go:141] libmachine: (addons-327804) DBG | Using SSH client type: external
	I1209 23:44:01.685437   86928 main.go:141] libmachine: (addons-327804) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa (-rw-------)
	I1209 23:44:01.685471   86928 main.go:141] libmachine: (addons-327804) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1209 23:44:01.685485   86928 main.go:141] libmachine: (addons-327804) DBG | About to run SSH command:
	I1209 23:44:01.685501   86928 main.go:141] libmachine: (addons-327804) DBG | exit 0
	I1209 23:44:01.814171   86928 main.go:141] libmachine: (addons-327804) DBG | SSH cmd err, output: <nil>: 
	I1209 23:44:01.814483   86928 main.go:141] libmachine: (addons-327804) KVM machine creation complete!
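Note: SSH readiness is probed by repeatedly running a no-op command ("exit 0") on the guest with the external ssh client and the options logged above. A minimal sketch under those assumptions (key path, user, and address copied from the log; the retry count is arbitrary):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Options and paths taken from the ssh invocation in the log above.
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa",
            "-p", "22",
            "docker@192.168.39.22",
            "exit 0",
        }
        for attempt := 1; attempt <= 20; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                log.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second)
        }
        log.Fatal("gave up waiting for SSH")
    }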
	I1209 23:44:01.814883   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:44:01.815500   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:01.815690   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:01.815796   86928 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1209 23:44:01.815819   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:01.817177   86928 main.go:141] libmachine: Detecting operating system of created instance...
	I1209 23:44:01.817195   86928 main.go:141] libmachine: Waiting for SSH to be available...
	I1209 23:44:01.817202   86928 main.go:141] libmachine: Getting to WaitForSSH function...
	I1209 23:44:01.817210   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:01.819407   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.819751   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.819777   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.819904   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:01.820083   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.820228   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.820336   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:01.820458   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:01.820694   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:01.820705   86928 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1209 23:44:01.929249   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:01.929278   86928 main.go:141] libmachine: Detecting the provisioner...
	I1209 23:44:01.929285   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:01.931934   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.932282   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:01.932311   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:01.932490   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:01.932695   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.932846   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:01.932964   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:01.933095   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:01.933272   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:01.933283   86928 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1209 23:44:02.042800   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1209 23:44:02.042869   86928 main.go:141] libmachine: found compatible host: buildroot
	I1209 23:44:02.042878   86928 main.go:141] libmachine: Provisioning with buildroot...
	I1209 23:44:02.042897   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.043162   86928 buildroot.go:166] provisioning hostname "addons-327804"
	I1209 23:44:02.043195   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.043431   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.046239   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.046727   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.046756   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.046931   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.047130   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.047290   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.047408   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.047607   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.047822   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.047836   86928 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-327804 && echo "addons-327804" | sudo tee /etc/hostname
	I1209 23:44:02.171028   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-327804
	
	I1209 23:44:02.171070   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.173742   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.174068   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.174102   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.174315   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.174510   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.174708   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.174870   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.175042   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.175264   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.175282   86928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-327804' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-327804/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-327804' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 23:44:02.295301   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 23:44:02.295339   86928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1209 23:44:02.295376   86928 buildroot.go:174] setting up certificates
	I1209 23:44:02.295389   86928 provision.go:84] configureAuth start
	I1209 23:44:02.295400   86928 main.go:141] libmachine: (addons-327804) Calling .GetMachineName
	I1209 23:44:02.295707   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:02.298422   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.298771   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.298802   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.298911   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.301005   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.301320   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.301349   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.301510   86928 provision.go:143] copyHostCerts
	I1209 23:44:02.301603   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1209 23:44:02.301776   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1209 23:44:02.301888   86928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1209 23:44:02.302051   86928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.addons-327804 san=[127.0.0.1 192.168.39.22 addons-327804 localhost minikube]
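Note: the server certificate generated here is signed by the cluster CA and carries exactly the SANs listed in the log line above. The sketch below shows the same idea with Go's crypto/x509; it is illustrative only, since it creates a throwaway CA instead of loading the existing ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // Stand-in CA; the real provisioner loads ca.pem / ca-key.pem created earlier.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate with the SANs reported in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-327804"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.22")},
            DNSNames:     []string{"addons-327804", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }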
	I1209 23:44:02.392285   86928 provision.go:177] copyRemoteCerts
	I1209 23:44:02.392358   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 23:44:02.392385   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.395299   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.395647   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.395676   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.395899   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.396075   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.396234   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.396368   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:02.479905   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 23:44:02.502117   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1209 23:44:02.523286   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1209 23:44:02.546305   86928 provision.go:87] duration metric: took 250.901798ms to configureAuth
	I1209 23:44:02.546339   86928 buildroot.go:189] setting minikube options for container-runtime
	I1209 23:44:02.546495   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:02.546618   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:02.549341   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.549788   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:02.549811   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:02.549945   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:02.550137   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.550291   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:02.550455   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:02.550621   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:02.550834   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:02.550856   86928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1209 23:44:03.099509   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1209 23:44:03.099536   86928 main.go:141] libmachine: Checking connection to Docker...
	I1209 23:44:03.099544   86928 main.go:141] libmachine: (addons-327804) Calling .GetURL
	I1209 23:44:03.100900   86928 main.go:141] libmachine: (addons-327804) DBG | Using libvirt version 6000000
	I1209 23:44:03.103437   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.103743   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.103772   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.103964   86928 main.go:141] libmachine: Docker is up and running!
	I1209 23:44:03.103976   86928 main.go:141] libmachine: Reticulating splines...
	I1209 23:44:03.103984   86928 client.go:171] duration metric: took 22.226152223s to LocalClient.Create
	I1209 23:44:03.104006   86928 start.go:167] duration metric: took 22.226220642s to libmachine.API.Create "addons-327804"
	I1209 23:44:03.104024   86928 start.go:293] postStartSetup for "addons-327804" (driver="kvm2")
	I1209 23:44:03.104036   86928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 23:44:03.104053   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.104257   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 23:44:03.104286   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.106425   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.106773   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.106801   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.106947   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.107102   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.107246   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.107367   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.192050   86928 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 23:44:03.195674   86928 info.go:137] Remote host: Buildroot 2023.02.9
	I1209 23:44:03.195701   86928 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1209 23:44:03.195778   86928 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1209 23:44:03.195806   86928 start.go:296] duration metric: took 91.77425ms for postStartSetup
	I1209 23:44:03.195842   86928 main.go:141] libmachine: (addons-327804) Calling .GetConfigRaw
	I1209 23:44:03.214336   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:03.216753   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.217097   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.217125   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.217379   86928 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/config.json ...
	I1209 23:44:03.278348   86928 start.go:128] duration metric: took 22.417241644s to createHost
	I1209 23:44:03.278391   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.280868   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.281165   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.281215   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.281329   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.281538   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.281690   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.281829   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.281997   86928 main.go:141] libmachine: Using SSH client type: native
	I1209 23:44:03.282175   86928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I1209 23:44:03.282195   86928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 23:44:03.394890   86928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733787843.369809494
	
	I1209 23:44:03.394926   86928 fix.go:216] guest clock: 1733787843.369809494
	I1209 23:44:03.394934   86928 fix.go:229] Guest: 2024-12-09 23:44:03.369809494 +0000 UTC Remote: 2024-12-09 23:44:03.278372278 +0000 UTC m=+22.516027277 (delta=91.437216ms)
	I1209 23:44:03.394979   86928 fix.go:200] guest clock delta is within tolerance: 91.437216ms
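For reference, the guest-clock delta printed above is just guest minus host wall-clock time: 1733787843.369809494 − 1733787843.278372278 ≈ 0.091437 s, i.e. the 91.437216ms that fix.go reports; because that falls within the tolerance fix.go checks, the guest clock is left untouched.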
	I1209 23:44:03.394993   86928 start.go:83] releasing machines lock for "addons-327804", held for 22.533968839s
	I1209 23:44:03.395016   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.395271   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:03.397874   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.398210   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.398243   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.398418   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.398862   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.399024   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:03.399110   86928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 23:44:03.399151   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.399183   86928 ssh_runner.go:195] Run: cat /version.json
	I1209 23:44:03.399208   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:03.401550   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.401771   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.401912   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.401938   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.402080   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:03.402095   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.402106   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:03.402268   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:03.402285   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.402434   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:03.402494   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.402636   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:03.402640   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.402759   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:03.503405   86928 ssh_runner.go:195] Run: systemctl --version
	I1209 23:44:03.509148   86928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1209 23:44:04.143482   86928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 23:44:04.149978   86928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 23:44:04.150058   86928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1209 23:44:04.164249   86928 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 23:44:04.164289   86928 start.go:495] detecting cgroup driver to use...
	I1209 23:44:04.164357   86928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 23:44:04.179572   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 23:44:04.192217   86928 docker.go:217] disabling cri-docker service (if available) ...
	I1209 23:44:04.192263   86928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1209 23:44:04.204386   86928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1209 23:44:04.216516   86928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1209 23:44:04.330735   86928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1209 23:44:04.470835   86928 docker.go:233] disabling docker service ...
	I1209 23:44:04.470912   86928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1209 23:44:04.485544   86928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1209 23:44:04.497698   86928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1209 23:44:04.633101   86928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1209 23:44:04.742096   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1209 23:44:04.754394   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 23:44:04.770407   86928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1209 23:44:04.770460   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.779547   86928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1209 23:44:04.779597   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.788850   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.797834   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.806902   86928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 23:44:04.816191   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.825058   86928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.839776   86928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1209 23:44:04.848904   86928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 23:44:04.857138   86928 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1209 23:44:04.857180   86928 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1209 23:44:04.869011   86928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 23:44:04.877184   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:04.994409   86928 ssh_runner.go:195] Run: sudo systemctl restart crio
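The CRI-O preparation logged above boils down to a short sequence of shell steps. They are collected here purely for readability (a sketch, not minikube's own script; the paths, sed expressions and values are taken verbatim from the commands in the log, and the default_sysctls edit for net.ipv4.ip_unprivileged_port_start=0 is elided):

        # point crictl at the CRI-O socket
        printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
        # pin the pause image and switch CRI-O to the cgroupfs cgroup manager
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
        # make bridged pod traffic visible to iptables and enable IPv4 forwarding
        sudo modprobe br_netfilter
        sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
        # apply the changes
        sudo systemctl daemon-reload && sudo systemctl restart crio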
	I1209 23:44:05.083715   86928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1209 23:44:05.083806   86928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1209 23:44:05.088015   86928 start.go:563] Will wait 60s for crictl version
	I1209 23:44:05.088067   86928 ssh_runner.go:195] Run: which crictl
	I1209 23:44:05.091453   86928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 23:44:05.125461   86928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1209 23:44:05.125557   86928 ssh_runner.go:195] Run: crio --version
	I1209 23:44:05.150068   86928 ssh_runner.go:195] Run: crio --version
	I1209 23:44:05.176119   86928 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1209 23:44:05.177267   86928 main.go:141] libmachine: (addons-327804) Calling .GetIP
	I1209 23:44:05.180022   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:05.180478   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:05.180498   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:05.180737   86928 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1209 23:44:05.184334   86928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:05.195606   86928 kubeadm.go:883] updating cluster {Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1209 23:44:05.195708   86928 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1209 23:44:05.195745   86928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:05.228699   86928 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1209 23:44:05.228757   86928 ssh_runner.go:195] Run: which lz4
	I1209 23:44:05.232192   86928 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 23:44:05.235703   86928 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 23:44:05.235730   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1209 23:44:06.400205   86928 crio.go:462] duration metric: took 1.168034461s to copy over tarball
	I1209 23:44:06.400280   86928 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 23:44:08.366438   86928 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.966106221s)
	I1209 23:44:08.366474   86928 crio.go:469] duration metric: took 1.966239202s to extract the tarball
	I1209 23:44:08.366483   86928 ssh_runner.go:146] rm: /preloaded.tar.lz4
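For scale: preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 is 392,059,347 bytes, so the 1.168s copy above works out to roughly 335 MB/s (about 320 MiB/s) over the host-to-guest SSH link, and the lz4 extraction into /var adds just under another two seconds.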
	I1209 23:44:08.402189   86928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1209 23:44:08.441003   86928 crio.go:514] all images are preloaded for cri-o runtime.
	I1209 23:44:08.441026   86928 cache_images.go:84] Images are preloaded, skipping loading
	I1209 23:44:08.441034   86928 kubeadm.go:934] updating node { 192.168.39.22 8443 v1.31.2 crio true true} ...
	I1209 23:44:08.441172   86928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-327804 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 23:44:08.441249   86928 ssh_runner.go:195] Run: crio config
	I1209 23:44:08.483454   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:44:08.483477   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:44:08.483486   86928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 23:44:08.483511   86928 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-327804 NodeName:addons-327804 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 23:44:08.483660   86928 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-327804"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.22"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 23:44:08.483734   86928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1209 23:44:08.492640   86928 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 23:44:08.492708   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 23:44:08.501462   86928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1209 23:44:08.516710   86928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 23:44:08.530966   86928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
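The 2290-byte kubeadm.yaml.new written above is the four-document block logged a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---); it is later promoted to /var/tmp/minikube/kubeadm.yaml. To sanity-check such a file by hand, one option (a sketch; recent kubeadm releases ship a config validate subcommand, and --dry-run avoids persisting cluster state) would be:

        # using the kubeadm binary minikube installs on the node
        sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
        # or exercise the full init flow without persisting cluster state
        sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run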
	I1209 23:44:08.545576   86928 ssh_runner.go:195] Run: grep 192.168.39.22	control-plane.minikube.internal$ /etc/hosts
	I1209 23:44:08.548900   86928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 23:44:08.559450   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:08.675550   86928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:08.691022   86928 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804 for IP: 192.168.39.22
	I1209 23:44:08.691046   86928 certs.go:194] generating shared ca certs ...
	I1209 23:44:08.691065   86928 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.691207   86928 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1209 23:44:08.942897   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt ...
	I1209 23:44:08.942927   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt: {Name:mkf2978b46aec7c7d5417e4710a2b718935c7d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.943087   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key ...
	I1209 23:44:08.943098   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key: {Name:mkf00ec6ca7c6015e1d641e357e85d6ce1c54cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:08.943170   86928 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1209 23:44:09.220123   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt ...
	I1209 23:44:09.220150   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt: {Name:mk56f9f07e96af9ce9147ed2b56a10686bae6c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.220320   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key ...
	I1209 23:44:09.220334   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key: {Name:mkee0faa24d1c6cf590bf83ee394a96e62ebb923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.220403   86928 certs.go:256] generating profile certs ...
	I1209 23:44:09.220485   86928 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key
	I1209 23:44:09.220501   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt with IP's: []
	I1209 23:44:09.351458   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt ...
	I1209 23:44:09.351486   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: {Name:mk11cb4170a81b64e18c85f9fa97b4f70e4ea9fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.351635   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key ...
	I1209 23:44:09.351645   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.key: {Name:mk8c977160e45fcfce49e593a5b4639fe8980487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.351712   86928 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d
	I1209 23:44:09.351729   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22]
	I1209 23:44:09.452195   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d ...
	I1209 23:44:09.452226   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d: {Name:mkafce7c2457e1bd7194ec34cf3560cce14a69fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.452380   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d ...
	I1209 23:44:09.452392   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d: {Name:mkf3b659da29c5208a8f2793c35495cfa2f39e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.452469   86928 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt.eb562c7d -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt
	I1209 23:44:09.452543   86928 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key.eb562c7d -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key
	I1209 23:44:09.452588   86928 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key
	I1209 23:44:09.452606   86928 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt with IP's: []
	I1209 23:44:09.530678   86928 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt ...
	I1209 23:44:09.530712   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt: {Name:mk5d5f84a2f92697814cfa67a696461679d0d719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.530880   86928 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key ...
	I1209 23:44:09.530893   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key: {Name:mk2fc5c1c90ecbd59db084f19e469dfa742178a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:09.531074   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1209 23:44:09.531118   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1209 23:44:09.531146   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1209 23:44:09.531174   86928 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1209 23:44:09.531759   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 23:44:09.557306   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1209 23:44:09.579044   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 23:44:09.604914   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1209 23:44:09.626547   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1209 23:44:09.647592   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1209 23:44:09.668535   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 23:44:09.689536   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1209 23:44:09.710681   86928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 23:44:09.732012   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 23:44:09.746842   86928 ssh_runner.go:195] Run: openssl version
	I1209 23:44:09.752203   86928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 23:44:09.761799   86928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.765922   86928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.765975   86928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 23:44:09.771392   86928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
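The three steps above wire the freshly generated minikubeCA into the guest's trust store: the certificate is linked as /etc/ssl/certs/minikubeCA.pem, its OpenSSL subject hash is computed, and a c_rehash-style <hash>.0 symlink is created so OpenSSL-based clients can find it. Spelled out (the b5213941 value is inferred from the symlink name in the log):

        openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, here b5213941
        sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0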
	I1209 23:44:09.781109   86928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 23:44:09.784743   86928 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1209 23:44:09.784795   86928 kubeadm.go:392] StartCluster: {Name:addons-327804 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-327804 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:44:09.784893   86928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1209 23:44:09.784936   86928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1209 23:44:09.816553   86928 cri.go:89] found id: ""
	I1209 23:44:09.816640   86928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 23:44:09.826151   86928 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 23:44:09.834916   86928 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 23:44:09.843339   86928 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 23:44:09.843360   86928 kubeadm.go:157] found existing configuration files:
	
	I1209 23:44:09.843404   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1209 23:44:09.851337   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 23:44:09.851376   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 23:44:09.859578   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1209 23:44:09.867468   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 23:44:09.867511   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 23:44:09.875698   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1209 23:44:09.883651   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 23:44:09.883695   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 23:44:09.892038   86928 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1209 23:44:09.899869   86928 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 23:44:09.899922   86928 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 23:44:09.908140   86928 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 23:44:10.055322   86928 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 23:44:19.611197   86928 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1209 23:44:19.611269   86928 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 23:44:19.611398   86928 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 23:44:19.611524   86928 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 23:44:19.611616   86928 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 23:44:19.611668   86928 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 23:44:19.612939   86928 out.go:235]   - Generating certificates and keys ...
	I1209 23:44:19.613025   86928 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 23:44:19.613100   86928 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 23:44:19.613227   86928 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1209 23:44:19.613302   86928 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1209 23:44:19.613393   86928 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1209 23:44:19.613442   86928 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1209 23:44:19.613488   86928 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1209 23:44:19.613661   86928 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-327804 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I1209 23:44:19.613745   86928 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1209 23:44:19.613914   86928 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-327804 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I1209 23:44:19.614016   86928 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1209 23:44:19.614132   86928 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1209 23:44:19.614193   86928 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1209 23:44:19.614270   86928 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 23:44:19.614331   86928 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 23:44:19.614387   86928 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1209 23:44:19.614428   86928 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 23:44:19.614477   86928 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 23:44:19.614523   86928 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 23:44:19.614638   86928 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 23:44:19.614739   86928 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 23:44:19.616024   86928 out.go:235]   - Booting up control plane ...
	I1209 23:44:19.616144   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 23:44:19.616252   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 23:44:19.616342   86928 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 23:44:19.616495   86928 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 23:44:19.616618   86928 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 23:44:19.616682   86928 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 23:44:19.616867   86928 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1209 23:44:19.617011   86928 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1209 23:44:19.617102   86928 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.07041ms
	I1209 23:44:19.617193   86928 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1209 23:44:19.617282   86928 kubeadm.go:310] [api-check] The API server is healthy after 5.002351731s
	I1209 23:44:19.617455   86928 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 23:44:19.617594   86928 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 23:44:19.617651   86928 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 23:44:19.617885   86928 kubeadm.go:310] [mark-control-plane] Marking the node addons-327804 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 23:44:19.617976   86928 kubeadm.go:310] [bootstrap-token] Using token: 1dhh9t.u8r2jfyc7htbxy61
	I1209 23:44:19.620165   86928 out.go:235]   - Configuring RBAC rules ...
	I1209 23:44:19.620264   86928 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 23:44:19.620351   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 23:44:19.620505   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 23:44:19.620663   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 23:44:19.620825   86928 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 23:44:19.620935   86928 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 23:44:19.621066   86928 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 23:44:19.621107   86928 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 23:44:19.621171   86928 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 23:44:19.621190   86928 kubeadm.go:310] 
	I1209 23:44:19.621269   86928 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 23:44:19.621279   86928 kubeadm.go:310] 
	I1209 23:44:19.621394   86928 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 23:44:19.621406   86928 kubeadm.go:310] 
	I1209 23:44:19.621438   86928 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 23:44:19.621521   86928 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 23:44:19.621593   86928 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 23:44:19.621604   86928 kubeadm.go:310] 
	I1209 23:44:19.621667   86928 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 23:44:19.621676   86928 kubeadm.go:310] 
	I1209 23:44:19.621712   86928 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 23:44:19.621718   86928 kubeadm.go:310] 
	I1209 23:44:19.621766   86928 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 23:44:19.621837   86928 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 23:44:19.621910   86928 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 23:44:19.621923   86928 kubeadm.go:310] 
	I1209 23:44:19.622027   86928 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 23:44:19.622134   86928 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 23:44:19.622146   86928 kubeadm.go:310] 
	I1209 23:44:19.622244   86928 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1dhh9t.u8r2jfyc7htbxy61 \
	I1209 23:44:19.622381   86928 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1209 23:44:19.622417   86928 kubeadm.go:310] 	--control-plane 
	I1209 23:44:19.622425   86928 kubeadm.go:310] 
	I1209 23:44:19.622553   86928 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 23:44:19.622572   86928 kubeadm.go:310] 
	I1209 23:44:19.622695   86928 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1dhh9t.u8r2jfyc7htbxy61 \
	I1209 23:44:19.622869   86928 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1209 23:44:19.622882   86928 cni.go:84] Creating CNI manager for ""
	I1209 23:44:19.622888   86928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:44:19.624611   86928 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 23:44:19.625877   86928 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 23:44:19.637101   86928 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
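The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not echoed in the log. Purely as an illustration of what a bridge conflist of this kind looks like (only the 10.244.0.0/16 pod CIDR is taken from the log; every other field below is a generic example, not minikube's exact file), it typically contains something like:

        {
          "cniVersion": "1.0.0",
          "name": "bridge",
          "plugins": [
            {
              "type": "bridge",
              "bridge": "bridge",
              "isDefaultGateway": true,
              "ipMasq": true,
              "hairpinMode": true,
              "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
            },
            { "type": "portmap", "capabilities": { "portMappings": true } }
          ]
        }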
	I1209 23:44:19.657389   86928 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 23:44:19.657507   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:19.657521   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-327804 minikube.k8s.io/updated_at=2024_12_09T23_44_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=addons-327804 minikube.k8s.io/primary=true
	I1209 23:44:19.673438   86928 ops.go:34] apiserver oom_adj: -16
	I1209 23:44:19.779257   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.279456   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:20.779940   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.279524   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:21.779979   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.279777   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:22.779962   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.279824   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.780274   86928 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 23:44:23.854779   86928 kubeadm.go:1113] duration metric: took 4.197332151s to wait for elevateKubeSystemPrivileges
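The burst of kubectl get sa default calls above, one roughly every 500ms from 23:44:19.78 to 23:44:23.78, is minikube polling for the default ServiceAccount to appear after creating the minikube-rbac cluster-admin binding; the duration metric shows the whole elevateKubeSystemPrivileges step took about 4.2s. An equivalent hand-rolled wait (a sketch, not minikube's actual code) would be:

        until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
              --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
          sleep 0.5   # the log shows ~500ms between attempts
        done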
	I1209 23:44:23.854825   86928 kubeadm.go:394] duration metric: took 14.070033437s to StartCluster
	I1209 23:44:23.854854   86928 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:23.854988   86928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:44:23.855559   86928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 23:44:23.855785   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1209 23:44:23.855817   86928 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1209 23:44:23.855863   86928 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1209 23:44:23.855988   86928 addons.go:69] Setting yakd=true in profile "addons-327804"
	I1209 23:44:23.856010   86928 addons.go:234] Setting addon yakd=true in "addons-327804"
	I1209 23:44:23.856005   86928 addons.go:69] Setting metrics-server=true in profile "addons-327804"
	I1209 23:44:23.856028   86928 addons.go:69] Setting volcano=true in profile "addons-327804"
	I1209 23:44:23.856027   86928 addons.go:69] Setting storage-provisioner=true in profile "addons-327804"
	I1209 23:44:23.856048   86928 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-327804"
	I1209 23:44:23.856053   86928 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-327804"
	I1209 23:44:23.856061   86928 addons.go:69] Setting volumesnapshots=true in profile "addons-327804"
	I1209 23:44:23.856065   86928 addons.go:234] Setting addon storage-provisioner=true in "addons-327804"
	I1209 23:44:23.856072   86928 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-327804"
	I1209 23:44:23.856086   86928 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-327804"
	I1209 23:44:23.856084   86928 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-327804"
	I1209 23:44:23.856098   86928 addons.go:69] Setting registry=true in profile "addons-327804"
	I1209 23:44:23.856110   86928 addons.go:69] Setting ingress=true in profile "addons-327804"
	I1209 23:44:23.856112   86928 addons.go:69] Setting ingress-dns=true in profile "addons-327804"
	I1209 23:44:23.856117   86928 addons.go:234] Setting addon registry=true in "addons-327804"
	I1209 23:44:23.856120   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856122   86928 addons.go:234] Setting addon ingress=true in "addons-327804"
	I1209 23:44:23.856126   86928 addons.go:234] Setting addon ingress-dns=true in "addons-327804"
	I1209 23:44:23.856139   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856155   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856155   86928 addons.go:69] Setting default-storageclass=true in profile "addons-327804"
	I1209 23:44:23.856187   86928 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-327804"
	I1209 23:44:23.856150   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856560   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856571   86928 addons.go:69] Setting gcp-auth=true in profile "addons-327804"
	I1209 23:44:23.856586   86928 mustload.go:65] Loading cluster: addons-327804
	I1209 23:44:23.856587   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856589   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856605   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856075   86928 addons.go:234] Setting addon volumesnapshots=true in "addons-327804"
	I1209 23:44:23.856631   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856632   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856645   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856048   86928 addons.go:234] Setting addon volcano=true in "addons-327804"
	I1209 23:44:23.855992   86928 addons.go:69] Setting cloud-spanner=true in profile "addons-327804"
	I1209 23:44:23.856700   86928 addons.go:69] Setting inspektor-gadget=true in profile "addons-327804"
	I1209 23:44:23.856605   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856713   86928 addons.go:234] Setting addon inspektor-gadget=true in "addons-327804"
	I1209 23:44:23.856700   86928 addons.go:234] Setting addon cloud-spanner=true in "addons-327804"
	I1209 23:44:23.856040   86928 addons.go:234] Setting addon metrics-server=true in "addons-327804"
	I1209 23:44:23.856730   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.856757   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:23.856102   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856106   86928 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-327804"
	I1209 23:44:23.856088   86928 config.go:182] Loaded profile config "addons-327804": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1209 23:44:23.856019   86928 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-327804"
	I1209 23:44:23.856963   86928 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-327804"
	I1209 23:44:23.856561   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.856987   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857058   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.856050   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857080   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857113   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857132   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857148   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857169   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857187   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857203   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857390   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857406   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857412   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857424   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857570   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857706   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.857710   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857734   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.857876   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.857972   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.858001   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.858155   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.858199   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.858215   86928 out.go:177] * Verifying Kubernetes components...
	I1209 23:44:23.858479   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.859661   86928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 23:44:23.872292   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I1209 23:44:23.875068   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875113   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875195   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875212   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875517   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.875549   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.875911   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.876689   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.876712   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.876816   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I1209 23:44:23.876954   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I1209 23:44:23.877141   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.885609   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.885707   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.885707   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.886246   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.886263   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.886455   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.886471   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.886890   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.886954   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.887228   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.887325   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.889735   86928 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-327804"
	I1209 23:44:23.889782   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.890126   86928 addons.go:234] Setting addon default-storageclass=true in "addons-327804"
	I1209 23:44:23.890151   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.890170   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.890185   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.890538   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.890584   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.891220   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:23.891584   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.891618   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.901644   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32925
	I1209 23:44:23.902150   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.902772   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.902795   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.903383   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.904096   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.904135   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.904700   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I1209 23:44:23.905067   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.905667   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.905692   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.906097   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.906664   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.906701   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.914227   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41549
	I1209 23:44:23.914623   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.915212   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.915232   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.915611   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.916142   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.916179   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.916528   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I1209 23:44:23.917012   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.917546   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.917565   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.917631   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
	I1209 23:44:23.918103   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.918163   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.918241   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I1209 23:44:23.918917   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.918958   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.919251   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.919266   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.919322   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34769
	I1209 23:44:23.919599   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.920116   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.920150   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.920364   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.920924   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.920941   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.921424   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.921482   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I1209 23:44:23.921778   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.922208   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.922240   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.922434   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.922976   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.922995   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.923817   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.923834   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.924153   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.930445   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.930450   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I1209 23:44:23.930834   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.931355   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.931381   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.931721   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.935272   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I1209 23:44:23.935618   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.936123   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.936142   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.936212   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42849
	I1209 23:44:23.936739   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.936783   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.937139   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.937159   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.937195   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.937510   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.939017   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939055   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939059   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939091   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939114   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939153   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939636   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.939671   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.939845   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I1209 23:44:23.940637   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.941131   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.941156   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.941466   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.941529   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36239
	I1209 23:44:23.942118   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.942158   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.942873   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.943338   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.943364   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.943705   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.943881   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.944611   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
	I1209 23:44:23.945105   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.945697   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.945715   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.946219   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.946276   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.946406   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.948187   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.948192   86928 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1209 23:44:23.949289   86928 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1209 23:44:23.949421   86928 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:23.949437   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1209 23:44:23.949457   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.951767   86928 out.go:177]   - Using image docker.io/registry:2.8.3
	I1209 23:44:23.952183   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.952594   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.952617   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.952871   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.952875   86928 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1209 23:44:23.952890   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1209 23:44:23.952907   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.953013   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.953116   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.953211   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.953847   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I1209 23:44:23.954384   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.954977   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.955002   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.955339   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.955842   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.955856   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1209 23:44:23.955876   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.956363   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.956903   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.956922   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.957355   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.957563   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.958295   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.959131   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.959143   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.959169   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.959307   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.959549   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.959752   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.960932   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.961699   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37095
	I1209 23:44:23.962261   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.962982   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.963000   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.963121   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1209 23:44:23.963286   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1209 23:44:23.964400   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.964422   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1209 23:44:23.965013   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.965033   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.965437   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.966018   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.966059   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.966264   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I1209 23:44:23.966472   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1209 23:44:23.966787   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.967393   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.967410   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.967464   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.967700   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.968647   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1209 23:44:23.969005   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.969664   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.969705   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.970328   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.970875   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1209 23:44:23.971731   86928 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1209 23:44:23.971777   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1209 23:44:23.972912   86928 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:23.972933   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1209 23:44:23.972952   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.973438   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1209 23:44:23.974624   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1209 23:44:23.975739   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1209 23:44:23.975764   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1209 23:44:23.975783   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.976227   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.976780   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.976809   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.976944   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.977026   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I1209 23:44:23.977362   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.977611   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.977842   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.978501   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.979067   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.979084   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.979144   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I1209 23:44:23.979514   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.979905   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.979975   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.979986   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.980328   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.980393   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.980411   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.980444   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.981064   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:23.981107   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:23.981319   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.981358   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.981406   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1209 23:44:23.981462   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.981580   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.981652   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.981994   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.982479   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.982501   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.982965   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.983172   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.985527   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.987363   86928 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1209 23:44:23.988232   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1209 23:44:23.988416   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1209 23:44:23.988429   86928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1209 23:44:23.988446   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.989650   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.989679   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.990399   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.990425   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.990887   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.991112   86928 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1209 23:44:23.991131   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.992081   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39713
	I1209 23:44:23.992325   86928 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:23.992346   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1209 23:44:23.992364   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.992411   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.992840   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.992871   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.993057   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.993120   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.993353   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.993548   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.993729   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.994255   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.994272   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.994327   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:23.995940   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.996034   86928 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1209 23:44:23.996261   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.996325   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:23.996340   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:23.996545   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:23.996879   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I1209 23:44:23.996904   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:23.997025   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:23.997205   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:23.997482   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.997752   86928 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1209 23:44:23.997768   86928 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1209 23:44:23.997785   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:23.998077   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:23.998099   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:23.998475   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.998528   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:23.998949   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42945
	I1209 23:44:23.999289   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:23.999349   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I1209 23:44:23.999492   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:23.999719   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.000122   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.000152   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.000480   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.000618   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.000634   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.000693   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.000743   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.001278   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.001656   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.002683   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.002702   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.002735   86928 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1209 23:44:24.003178   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.003216   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.003412   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.003529   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.003602   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.003762   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.003880   86928 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 23:44:24.003993   86928 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:24.004017   86928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 23:44:24.004031   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.004043   86928 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:24.004052   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1209 23:44:24.004064   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.004086   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.004556   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.005344   86928 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:24.005360   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 23:44:24.005378   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.005843   86928 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1209 23:44:24.007124   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I1209 23:44:24.007182   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1209 23:44:24.007197   86928 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1209 23:44:24.007222   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.007880   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.008378   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41813
	I1209 23:44:24.008559   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.008581   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.008722   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.008798   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.008841   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1209 23:44:24.009096   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.009322   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.009340   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.009383   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.009459   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.009859   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.009929   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.009943   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.010061   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.010080   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.010157   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.010204   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.010300   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.010308   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.010352   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.010400   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I1209 23:44:24.010439   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.010926   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:24.011373   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.011440   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.011480   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.011489   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:24.011502   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:24.011566   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.011723   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.011782   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.011963   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:24.012131   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.012311   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.012446   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:24.012641   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013263   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013685   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.013703   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.013749   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014061   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:24.014068   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014073   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:24.014443   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:24.014453   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:24.014471   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:24.014479   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:24.014487   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:24.014488   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014744   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:24.014988   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.015014   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.015280   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.015327   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:24.015327   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.015335   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 23:44:24.015397   86928 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1209 23:44:24.015562   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.015562   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.015703   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.015705   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.015846   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.015850   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.015958   86928 out.go:177]   - Using image docker.io/busybox:stable
	I1209 23:44:24.016007   86928 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1209 23:44:24.016008   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:24.017425   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1209 23:44:24.017444   86928 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1209 23:44:24.017469   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.017470   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:24.017531   86928 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1209 23:44:24.018801   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1209 23:44:24.018831   86928 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:24.018845   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1209 23:44:24.018866   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.019988   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.020043   86928 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:24.020062   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1209 23:44:24.020078   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:24.020361   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.020387   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.020545   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.020697   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.020814   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.020934   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.022425   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.022879   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.022906   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023109   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.023252   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.023287   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023401   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.023527   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.023741   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:24.023758   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:24.023907   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:24.024062   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:24.024207   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:24.024309   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:24.385405   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1209 23:44:24.425587   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1209 23:44:24.425628   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1209 23:44:24.443975   86928 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1209 23:44:24.444003   86928 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1209 23:44:24.497909   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1209 23:44:24.497938   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1209 23:44:24.513248   86928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 23:44:24.513367   86928 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1209 23:44:24.553500   86928 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:24.553527   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1209 23:44:24.576672   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1209 23:44:24.580288   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1209 23:44:24.593316   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1209 23:44:24.595716   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 23:44:24.598693   86928 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:24.598716   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1209 23:44:24.609390   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1209 23:44:24.609408   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1209 23:44:24.611226   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1209 23:44:24.611240   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1209 23:44:24.613280   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 23:44:24.615115   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1209 23:44:24.616544   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1209 23:44:24.645751   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1209 23:44:24.645772   86928 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1209 23:44:24.747585   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1209 23:44:24.747616   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1209 23:44:24.758366   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1209 23:44:24.769892   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1209 23:44:24.769909   86928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1209 23:44:24.784902   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1209 23:44:24.804978   86928 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1209 23:44:24.804999   86928 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1209 23:44:24.840231   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1209 23:44:24.840258   86928 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1209 23:44:24.956704   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1209 23:44:24.956734   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1209 23:44:24.963585   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1209 23:44:24.963610   86928 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1209 23:44:24.983142   86928 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:24.983165   86928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1209 23:44:25.028304   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1209 23:44:25.028334   86928 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1209 23:44:25.173249   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1209 23:44:25.194747   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1209 23:44:25.194776   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1209 23:44:25.214411   86928 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:25.214435   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1209 23:44:25.216820   86928 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:25.216838   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1209 23:44:25.423729   86928 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1209 23:44:25.423760   86928 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1209 23:44:25.437459   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:25.457853   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1209 23:44:25.731266   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1209 23:44:25.731289   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1209 23:44:26.179725   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1209 23:44:26.179763   86928 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1209 23:44:26.463793   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1209 23:44:26.463816   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1209 23:44:26.557953   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.17251362s)
	I1209 23:44:26.557997   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.558008   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.558323   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.558340   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.558348   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.558354   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.558670   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.558686   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.558720   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:26.686941   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1209 23:44:26.686980   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1209 23:44:26.749689   86928 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.236401378s)
	I1209 23:44:26.749725   86928 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.236316316s)
	I1209 23:44:26.749754   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.17304165s)
	I1209 23:44:26.749754   86928 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1209 23:44:26.749797   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.749813   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.750139   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.750182   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.750197   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:26.750206   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:26.750491   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:26.750508   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:26.750798   86928 node_ready.go:35] waiting up to 6m0s for node "addons-327804" to be "Ready" ...
	I1209 23:44:26.768727   86928 node_ready.go:49] node "addons-327804" has status "Ready":"True"
	I1209 23:44:26.768748   86928 node_ready.go:38] duration metric: took 17.924325ms for node "addons-327804" to be "Ready" ...
	I1209 23:44:26.768756   86928 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:44:26.792104   86928 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:27.100615   86928 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:27.100652   86928 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1209 23:44:27.257506   86928 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-327804" context rescaled to 1 replicas
	I1209 23:44:27.394984   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1209 23:44:27.637459   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.057127618s)
	I1209 23:44:27.637544   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:27.637561   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:27.638001   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:27.638012   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:27.638044   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:27.638066   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:27.638080   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:27.638390   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:27.638408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:28.798788   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:30.826051   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:31.008311   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1209 23:44:31.008367   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:31.012341   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.012909   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:31.012939   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.013209   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:31.013399   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:31.013557   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:31.013715   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:31.563991   86928 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
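	The two scp calls above stage the host's application-default credentials file and project ID inside the VM for the gcp-auth addon. A hedged manual equivalent, assuming the credentials are not already present on the host (the profile name is taken from this run; the gcloud step is an assumption, not shown in the log):
		gcloud auth application-default login
		minikube -p addons-327804 addons enable gcp-auth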
	I1209 23:44:31.678979   86928 addons.go:234] Setting addon gcp-auth=true in "addons-327804"
	I1209 23:44:31.679040   86928 host.go:66] Checking if "addons-327804" exists ...
	I1209 23:44:31.679355   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:31.679400   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:31.694819   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I1209 23:44:31.695362   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:31.695907   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:31.695934   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:31.696307   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:31.696762   86928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1209 23:44:31.696811   86928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1209 23:44:31.712222   86928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I1209 23:44:31.712708   86928 main.go:141] libmachine: () Calling .GetVersion
	I1209 23:44:31.713216   86928 main.go:141] libmachine: Using API Version  1
	I1209 23:44:31.713245   86928 main.go:141] libmachine: () Calling .SetConfigRaw
	I1209 23:44:31.713574   86928 main.go:141] libmachine: () Calling .GetMachineName
	I1209 23:44:31.713765   86928 main.go:141] libmachine: (addons-327804) Calling .GetState
	I1209 23:44:31.715540   86928 main.go:141] libmachine: (addons-327804) Calling .DriverName
	I1209 23:44:31.715760   86928 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1209 23:44:31.715786   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHHostname
	I1209 23:44:31.718599   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.719076   86928 main.go:141] libmachine: (addons-327804) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:5b:83", ip: ""} in network mk-addons-327804: {Iface:virbr1 ExpiryTime:2024-12-10 00:43:55 +0000 UTC Type:0 Mac:52:54:00:6e:5b:83 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:addons-327804 Clientid:01:52:54:00:6e:5b:83}
	I1209 23:44:31.719108   86928 main.go:141] libmachine: (addons-327804) DBG | domain addons-327804 has defined IP address 192.168.39.22 and MAC address 52:54:00:6e:5b:83 in network mk-addons-327804
	I1209 23:44:31.719274   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHPort
	I1209 23:44:31.719452   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHKeyPath
	I1209 23:44:31.719620   86928 main.go:141] libmachine: (addons-327804) Calling .GetSSHUsername
	I1209 23:44:31.719763   86928 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/addons-327804/id_rsa Username:docker}
	I1209 23:44:32.301404   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.708056236s)
	I1209 23:44:32.301457   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301473   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301487   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.705744381s)
	I1209 23:44:32.301529   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301544   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301561   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.688261924s)
	I1209 23:44:32.301586   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301603   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301668   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.686522911s)
	I1209 23:44:32.301705   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301722   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301786   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.301797   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.301805   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301812   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301813   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.685247937s)
	I1209 23:44:32.301834   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301844   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301843   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.301884   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.301892   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.301899   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301905   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301944   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.54355653s)
	I1209 23:44:32.301960   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.301969   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.301983   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302000   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302013   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302021   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302029   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302029   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.517102194s)
	I1209 23:44:32.302048   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302056   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302093   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.1288127s)
	I1209 23:44:32.302104   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302114   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302119   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302126   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302139   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302145   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302152   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302157   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302198   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302204   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302213   86928 addons.go:475] Verifying addon ingress=true in "addons-327804"
	I1209 23:44:32.302259   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.864767899s)
	W1209 23:44:32.302287   86928 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1209 23:44:32.302371   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.844493328s)
	I1209 23:44:32.302396   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302405   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.302469   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.302492   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.302498   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302506   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.302512   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.303389   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.303419   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.303426   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.303434   86928 addons.go:475] Verifying addon metrics-server=true in "addons-327804"
	I1209 23:44:32.305109   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305143   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305150   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305271   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305282   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305289   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305303   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305312   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.305319   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.305394   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305464   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305489   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305495   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305505   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.305511   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.305690   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305712   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305717   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.305825   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.305854   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.305860   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306283   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306296   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306304   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.306311   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.306377   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.306401   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.306415   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.306422   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.306594   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.306604   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.307376   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.307387   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.307392   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.307406   86928 addons.go:475] Verifying addon registry=true in "addons-327804"
	I1209 23:44:32.307413   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:32.307442   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.307450   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.302325   86928 retry.go:31] will retry after 297.02029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
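	The failure above is a CRD-establishment race: the VolumeSnapshotClass CRD is created in the same kubectl apply batch as the csi-hostpath-snapclass object, so the API server has no mapping for that kind yet; the scheduled retry succeeds once the CRDs are registered. Outside this harness, one way to sidestep the race (file names illustrative, taken from the paths in the log) is to wait for establishment before applying the custom resources:
		kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		kubectl apply -f csi-hostpath-snapshotclass.yaml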
	I1209 23:44:32.308218   86928 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-327804 service yakd-dashboard -n yakd-dashboard
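	minikube service proxies the addon's Service to the local machine; appending --url prints the reachable URL instead of opening a browser (flag assumed from standard minikube usage, not exercised in this log):
		minikube -p addons-327804 service yakd-dashboard -n yakd-dashboard --url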
	
	I1209 23:44:32.308961   86928 out.go:177] * Verifying registry addon...
	I1209 23:44:32.309829   86928 out.go:177] * Verifying ingress addon...
	I1209 23:44:32.310987   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1209 23:44:32.311612   86928 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1209 23:44:32.323319   86928 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1209 23:44:32.323339   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:32.327050   86928 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1209 23:44:32.327075   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
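	The kapi.go loop above polls the listed pods until each reports Ready, which is roughly the same check one could run by hand with kubectl wait (label selectors copied from the log lines above; the timeout is illustrative):
		kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
		kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m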
	I1209 23:44:32.332104   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.332128   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.332217   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:32.332240   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:32.332475   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.332491   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	W1209 23:44:32.332613   86928 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1209 23:44:32.332621   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:32.332658   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:32.332645   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
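	The "object has been modified" warning above is the API server's optimistic-concurrency conflict: the StorageClass changed between the addon's read and its write, so the stale update is rejected. Marking local-path as default can simply be retried, for example with a patch that sets the standard default-class annotation (a sketch, not what the addon code itself runs):
		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'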
	I1209 23:44:32.605544   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1209 23:44:32.816424   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:32.817813   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.313388   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:33.325991   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:33.326015   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.190112   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.190836   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.326603   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.931538446s)
	I1209 23:44:34.326653   86928 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.610870307s)
	I1209 23:44:34.326668   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.326688   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.326947   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.326970   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.326979   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.326987   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.326993   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:34.327234   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.327254   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.327265   86928 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-327804"
	I1209 23:44:34.328138   86928 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1209 23:44:34.329025   86928 out.go:177] * Verifying csi-hostpath-driver addon...
	I1209 23:44:34.330503   86928 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1209 23:44:34.331170   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1209 23:44:34.331642   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1209 23:44:34.331659   86928 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1209 23:44:34.346166   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.346475   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.346888   86928 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1209 23:44:34.346907   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:34.458396   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1209 23:44:34.458429   86928 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1209 23:44:34.540415   86928 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:34.540442   86928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1209 23:44:34.616397   86928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1209 23:44:34.628682   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.023075477s)
	I1209 23:44:34.628736   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.628754   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.629063   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.629105   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.629129   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:34.629143   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:34.629369   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:34.629384   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:34.629399   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:34.835189   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:34.837120   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:34.844063   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.316804   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.316978   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.336061   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:35.783963   86928 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.167521855s)
	I1209 23:44:35.784022   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:35.784036   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:35.784340   86928 main.go:141] libmachine: (addons-327804) DBG | Closing plugin on server side
	I1209 23:44:35.784386   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:35.784408   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:35.784420   86928 main.go:141] libmachine: Making call to close driver server
	I1209 23:44:35.784428   86928 main.go:141] libmachine: (addons-327804) Calling .Close
	I1209 23:44:35.784674   86928 main.go:141] libmachine: Successfully made call to close driver server
	I1209 23:44:35.784693   86928 main.go:141] libmachine: Making call to close connection to plugin binary
	I1209 23:44:35.786524   86928 addons.go:475] Verifying addon gcp-auth=true in "addons-327804"
	I1209 23:44:35.788919   86928 out.go:177] * Verifying gcp-auth addon...
	I1209 23:44:35.790598   86928 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1209 23:44:35.833053   86928 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1209 23:44:35.833078   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:35.835699   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:35.835839   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:35.840815   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:35.857730   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.294506   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.314395   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.316241   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.334919   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:36.795203   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:36.815677   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:36.816056   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:36.834952   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.300390   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.314307   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.316947   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.336155   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:37.797041   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:37.814235   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:37.816378   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:37.836224   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.295029   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.299659   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:38.314846   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.316943   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.337088   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:38.796253   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:38.817221   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:38.818458   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:38.836118   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.294433   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.315266   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.319143   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:39.336923   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.818396   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:39.915978   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:39.917074   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:39.917241   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.293606   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.313486   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.315399   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.334757   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:40.795019   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:40.797927   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:40.815293   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:40.815674   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:40.839790   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.294520   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.315785   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.315973   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.335759   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:41.793768   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:41.815193   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:41.816549   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:41.834524   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.294075   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.315060   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.315631   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.335339   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:42.794914   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:42.800847   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:42.813362   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:42.815211   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:42.834912   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.295310   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.313823   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.315793   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.335781   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:43.795567   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:43.814368   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:43.815903   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:43.835103   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.294796   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.313953   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.316079   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.335739   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:44.795436   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:44.814759   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:44.815851   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:44.834516   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.293779   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.297821   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:45.315390   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.315802   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.335924   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:45.794354   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:45.815468   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:45.815663   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:45.834637   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:46.293680   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:46.314676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:46.315878   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:46.335462   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.374018   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.374152   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.374236   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:47.374469   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.374501   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.380267   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.380408   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.380831   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.381546   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:47.794457   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:47.815120   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:47.816191   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:47.834817   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.296633   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.315665   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.316362   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.335744   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:48.794694   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:48.814933   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:48.817613   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:48.835965   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.294686   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.316104   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.316605   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.336005   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:49.795421   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:49.797853   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:49.814835   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:49.815463   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:49.835379   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.294938   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.316145   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.316312   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.335377   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:50.794407   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:50.815357   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:50.815569   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:50.836018   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.295492   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.315669   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.315768   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.334595   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:51.795630   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:51.797922   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:51.814426   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:51.815206   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:51.834995   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.294795   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.314740   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.315719   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.337208   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:52.795624   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:52.816190   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:52.817491   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:52.835286   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:53.293645   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:53.314842   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:53.316593   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:53.338661   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.023219   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.023321   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.024602   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.025403   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.025949   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:54.293900   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.315352   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.316072   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.337091   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:54.794592   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:54.814879   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:54.815535   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:54.835942   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.294204   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.315035   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.315573   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.336550   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:55.793446   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:55.815777   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:55.816098   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:55.835849   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.295920   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.298205   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:56.315461   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.316332   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.334806   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:56.796385   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:56.814769   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:56.815310   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:56.834907   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.294887   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.314045   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.315544   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.336350   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:57.796452   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:57.813770   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:57.815822   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:57.836353   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.293821   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.315365   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.315491   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.336178   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:58.796608   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:58.797938   86928 pod_ready.go:103] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"False"
	I1209 23:44:58.815498   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:58.815810   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:58.835460   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.294946   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.315225   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.315549   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.334994   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.795004   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:44:59.798249   86928 pod_ready.go:93] pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.798269   86928 pod_ready.go:82] duration metric: took 33.006139904s for pod "amd-gpu-device-plugin-pkmlz" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.798278   86928 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.800086   86928 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mv8d4" not found
	I1209 23:44:59.800108   86928 pod_ready.go:82] duration metric: took 1.82311ms for pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace to be "Ready" ...
	E1209 23:44:59.800121   86928 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mv8d4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mv8d4" not found
	I1209 23:44:59.800133   86928 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.806876   86928 pod_ready.go:93] pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.806895   86928 pod_ready.go:82] duration metric: took 6.755668ms for pod "coredns-7c65d6cfc9-r5t4g" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.806903   86928 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.812725   86928 pod_ready.go:93] pod "etcd-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.812748   86928 pod_ready.go:82] duration metric: took 5.837634ms for pod "etcd-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.812759   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.817158   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:44:59.817499   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:44:59.819782   86928 pod_ready.go:93] pod "kube-apiserver-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.819801   86928 pod_ready.go:82] duration metric: took 7.033791ms for pod "kube-apiserver-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.819813   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.834758   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:44:59.996874   86928 pod_ready.go:93] pod "kube-controller-manager-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:44:59.996896   86928 pod_ready.go:82] duration metric: took 177.075091ms for pod "kube-controller-manager-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:44:59.996906   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2cbzc" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.295676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.314329   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.316534   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.337627   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:00.396372   86928 pod_ready.go:93] pod "kube-proxy-2cbzc" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:00.396392   86928 pod_ready.go:82] duration metric: took 399.480869ms for pod "kube-proxy-2cbzc" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.396402   86928 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.795159   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:00.796568   86928 pod_ready.go:93] pod "kube-scheduler-addons-327804" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:00.796588   86928 pod_ready.go:82] duration metric: took 400.179692ms for pod "kube-scheduler-addons-327804" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.796598   86928 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:00.814903   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:00.816724   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:00.835344   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.196494   86928 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace has status "Ready":"True"
	I1209 23:45:01.196520   86928 pod_ready.go:82] duration metric: took 399.915118ms for pod "nvidia-device-plugin-daemonset-4fmgx" in "kube-system" namespace to be "Ready" ...
	I1209 23:45:01.196533   86928 pod_ready.go:39] duration metric: took 34.427764911s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1209 23:45:01.196555   86928 api_server.go:52] waiting for apiserver process to appear ...
	I1209 23:45:01.196619   86928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 23:45:01.216009   86928 api_server.go:72] duration metric: took 37.360157968s to wait for apiserver process to appear ...
	I1209 23:45:01.216037   86928 api_server.go:88] waiting for apiserver healthz status ...
	I1209 23:45:01.216060   86928 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I1209 23:45:01.220831   86928 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I1209 23:45:01.221900   86928 api_server.go:141] control plane version: v1.31.2
	I1209 23:45:01.221922   86928 api_server.go:131] duration metric: took 5.879405ms to wait for apiserver health ...
	I1209 23:45:01.221951   86928 system_pods.go:43] waiting for kube-system pods to appear ...
	I1209 23:45:01.294011   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.315367   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.315833   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.335097   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.403397   86928 system_pods.go:59] 18 kube-system pods found
	I1209 23:45:01.403430   86928 system_pods.go:61] "amd-gpu-device-plugin-pkmlz" [017587ab-2377-4f9e-92e2-218a17992ac4] Running
	I1209 23:45:01.403435   86928 system_pods.go:61] "coredns-7c65d6cfc9-r5t4g" [7a0c206f-316c-4ffb-9211-a965ab776e73] Running
	I1209 23:45:01.403442   86928 system_pods.go:61] "csi-hostpath-attacher-0" [d20aef45-da7a-435c-9074-2b9dc1cd24db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:45:01.403448   86928 system_pods.go:61] "csi-hostpath-resizer-0" [23152550-a282-425c-afac-778089918479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:45:01.403457   86928 system_pods.go:61] "csi-hostpathplugin-k6r22" [206125d5-90c8-4598-b3aa-f9156187f289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:45:01.403461   86928 system_pods.go:61] "etcd-addons-327804" [d7b1bc10-ad72-4172-8b75-501badde178f] Running
	I1209 23:45:01.403465   86928 system_pods.go:61] "kube-apiserver-addons-327804" [f7a261b7-39ac-450f-842e-dc53e5e91214] Running
	I1209 23:45:01.403468   86928 system_pods.go:61] "kube-controller-manager-addons-327804" [caff5b88-a93a-46f5-9bd1-94d6153a13c8] Running
	I1209 23:45:01.403472   86928 system_pods.go:61] "kube-ingress-dns-minikube" [badf09c8-255f-4cbf-835d-fe1d2cf14471] Running
	I1209 23:45:01.403475   86928 system_pods.go:61] "kube-proxy-2cbzc" [ee54203a-77d6-4367-8ccb-208364419fea] Running
	I1209 23:45:01.403479   86928 system_pods.go:61] "kube-scheduler-addons-327804" [903789aa-d4d6-4348-93c7-55c9823816d6] Running
	I1209 23:45:01.403483   86928 system_pods.go:61] "metrics-server-84c5f94fbc-4d528" [8de05551-49ab-4933-852a-16b88842a109] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:45:01.403490   86928 system_pods.go:61] "nvidia-device-plugin-daemonset-4fmgx" [a89eaf64-40a3-4ab2-a394-a852c6a26f53] Running
	I1209 23:45:01.403495   86928 system_pods.go:61] "registry-5cc95cd69-sr6kt" [38920e52-e20a-4542-af24-1efcde928cf7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1209 23:45:01.403500   86928 system_pods.go:61] "registry-proxy-rft2s" [6ff74e8e-3b66-4249-984f-1c881b667876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:45:01.403508   86928 system_pods.go:61] "snapshot-controller-56fcc65765-7ggrn" [2c529bb9-d4dd-41aa-ae16-5fd1853d334c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.403513   86928 system_pods.go:61] "snapshot-controller-56fcc65765-9ssqt" [6b3d1329-f736-4c18-8da6-a2e60b272146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.403521   86928 system_pods.go:61] "storage-provisioner" [7f8c8e7e-aef5-4f97-8808-537836392fb1] Running
	I1209 23:45:01.403528   86928 system_pods.go:74] duration metric: took 181.564053ms to wait for pod list to return data ...
	I1209 23:45:01.403538   86928 default_sa.go:34] waiting for default service account to be created ...
	I1209 23:45:01.597069   86928 default_sa.go:45] found service account: "default"
	I1209 23:45:01.597100   86928 default_sa.go:55] duration metric: took 193.55531ms for default service account to be created ...
	I1209 23:45:01.597110   86928 system_pods.go:116] waiting for k8s-apps to be running ...
	I1209 23:45:01.794096   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:01.800721   86928 system_pods.go:86] 18 kube-system pods found
	I1209 23:45:01.800745   86928 system_pods.go:89] "amd-gpu-device-plugin-pkmlz" [017587ab-2377-4f9e-92e2-218a17992ac4] Running
	I1209 23:45:01.800751   86928 system_pods.go:89] "coredns-7c65d6cfc9-r5t4g" [7a0c206f-316c-4ffb-9211-a965ab776e73] Running
	I1209 23:45:01.800757   86928 system_pods.go:89] "csi-hostpath-attacher-0" [d20aef45-da7a-435c-9074-2b9dc1cd24db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1209 23:45:01.800764   86928 system_pods.go:89] "csi-hostpath-resizer-0" [23152550-a282-425c-afac-778089918479] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1209 23:45:01.800771   86928 system_pods.go:89] "csi-hostpathplugin-k6r22" [206125d5-90c8-4598-b3aa-f9156187f289] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1209 23:45:01.800776   86928 system_pods.go:89] "etcd-addons-327804" [d7b1bc10-ad72-4172-8b75-501badde178f] Running
	I1209 23:45:01.800780   86928 system_pods.go:89] "kube-apiserver-addons-327804" [f7a261b7-39ac-450f-842e-dc53e5e91214] Running
	I1209 23:45:01.800783   86928 system_pods.go:89] "kube-controller-manager-addons-327804" [caff5b88-a93a-46f5-9bd1-94d6153a13c8] Running
	I1209 23:45:01.800788   86928 system_pods.go:89] "kube-ingress-dns-minikube" [badf09c8-255f-4cbf-835d-fe1d2cf14471] Running
	I1209 23:45:01.800791   86928 system_pods.go:89] "kube-proxy-2cbzc" [ee54203a-77d6-4367-8ccb-208364419fea] Running
	I1209 23:45:01.800794   86928 system_pods.go:89] "kube-scheduler-addons-327804" [903789aa-d4d6-4348-93c7-55c9823816d6] Running
	I1209 23:45:01.800801   86928 system_pods.go:89] "metrics-server-84c5f94fbc-4d528" [8de05551-49ab-4933-852a-16b88842a109] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1209 23:45:01.800805   86928 system_pods.go:89] "nvidia-device-plugin-daemonset-4fmgx" [a89eaf64-40a3-4ab2-a394-a852c6a26f53] Running
	I1209 23:45:01.800810   86928 system_pods.go:89] "registry-5cc95cd69-sr6kt" [38920e52-e20a-4542-af24-1efcde928cf7] Running
	I1209 23:45:01.800815   86928 system_pods.go:89] "registry-proxy-rft2s" [6ff74e8e-3b66-4249-984f-1c881b667876] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1209 23:45:01.800824   86928 system_pods.go:89] "snapshot-controller-56fcc65765-7ggrn" [2c529bb9-d4dd-41aa-ae16-5fd1853d334c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.800830   86928 system_pods.go:89] "snapshot-controller-56fcc65765-9ssqt" [6b3d1329-f736-4c18-8da6-a2e60b272146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1209 23:45:01.800834   86928 system_pods.go:89] "storage-provisioner" [7f8c8e7e-aef5-4f97-8808-537836392fb1] Running
	I1209 23:45:01.800842   86928 system_pods.go:126] duration metric: took 203.725819ms to wait for k8s-apps to be running ...
	I1209 23:45:01.800852   86928 system_svc.go:44] waiting for kubelet service to be running ....
	I1209 23:45:01.800896   86928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 23:45:01.814682   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:01.815599   86928 system_svc.go:56] duration metric: took 14.735237ms WaitForService to wait for kubelet
	I1209 23:45:01.815625   86928 kubeadm.go:582] duration metric: took 37.959779657s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 23:45:01.815650   86928 node_conditions.go:102] verifying NodePressure condition ...
	I1209 23:45:01.816510   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:01.834999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:01.996649   86928 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1209 23:45:01.996681   86928 node_conditions.go:123] node cpu capacity is 2
	I1209 23:45:01.996699   86928 node_conditions.go:105] duration metric: took 181.042355ms to run NodePressure ...
	I1209 23:45:01.996714   86928 start.go:241] waiting for startup goroutines ...
	I1209 23:45:02.299689   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.314241   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.314875   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.335141   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:02.793968   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:02.814653   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:02.814938   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:02.837603   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.293934   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.315295   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.315812   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.335557   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:03.794619   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:03.817112   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:03.817522   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:03.837271   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.296062   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.315519   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.317188   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.335957   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:04.793996   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:04.815270   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:04.817154   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:04.834971   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.294807   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.314881   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1209 23:45:05.315089   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.334337   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:05.793598   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:05.815868   86928 kapi.go:107] duration metric: took 33.504877747s to wait for kubernetes.io/minikube-addons=registry ...
	I1209 23:45:05.816151   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:05.834902   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.295066   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.315596   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.337679   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:06.796522   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:06.819378   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:06.835618   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.294539   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.316020   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.334800   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:07.795957   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:07.814807   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:07.898177   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.294969   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.315015   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.334677   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:08.794602   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:08.815785   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:08.835449   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.294347   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.315610   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.335913   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:09.794874   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:09.816279   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:09.836602   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.293962   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.316088   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.336950   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:10.794850   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:10.815333   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:10.834812   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.293947   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.314864   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.336551   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:11.793951   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:11.815074   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:11.835169   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.294157   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.316025   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.334999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:12.793537   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:12.816052   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:12.835220   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.294847   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.316349   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.530869   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:13.794199   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:13.815680   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:13.834887   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.294024   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.316467   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.335312   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:14.796494   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:14.818528   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:14.835913   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.315651   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.323527   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.358961   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:15.797719   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:15.816957   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:15.837099   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.295272   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.315412   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.396362   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:16.794170   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:16.822155   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:16.896707   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.293748   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.315802   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.335214   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:17.793616   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:17.816767   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:17.835654   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.295013   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.315495   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.335520   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:18.794139   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:18.815609   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:18.836865   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.294649   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.316145   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.334809   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:19.794462   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:19.815508   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:19.835295   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.295056   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.316527   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.338047   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:20.806853   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:20.815561   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:20.835205   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.294770   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.315980   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.334663   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:21.794777   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:21.816338   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:21.836230   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.294412   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.315702   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.335447   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:22.794628   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:22.815620   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:22.835361   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.293650   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.395283   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:23.395329   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.793877   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:23.815808   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:23.835409   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.293918   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.315860   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.335412   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:24.793839   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:24.815245   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:24.898032   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.294451   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.315704   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.335455   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:25.793836   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:25.816582   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:25.835669   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.628627   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.632813   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:26.634013   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.794979   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:26.896397   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:26.896487   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.293741   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.315766   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.335760   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:27.794529   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:27.815334   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:27.835041   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.293376   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.315301   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.335265   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:28.794052   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:28.814858   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:28.835666   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.294783   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.316351   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.335060   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:29.794176   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:29.815194   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:29.835926   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.298179   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.315086   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.335676   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:30.795710   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:30.816332   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:30.834980   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.295094   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.315096   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.334846   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:31.794579   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:31.815733   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:31.836372   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.294789   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.316068   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.335169   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:32.794681   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:32.819177   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:32.835923   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.294724   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.315705   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.335150   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:33.794029   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:33.815072   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:33.834873   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.294181   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.315479   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.335970   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:34.794208   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:34.815257   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:34.835318   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.295096   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.317426   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:35.336908   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:35.794508   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:35.816087   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:35.835432   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.294021   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.315872   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:36.335684   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:36.794283   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:36.817651   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:36.837393   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1209 23:45:37.295633   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:37.324093   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:37.344997   86928 kapi.go:107] duration metric: took 1m3.013818607s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1209 23:45:37.794111   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:37.815097   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:38.295498   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:38.316112   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.021212   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.021614   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.295872   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.316685   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:39.793930   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:39.816192   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:40.297315   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:40.316149   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:40.795086   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:40.817401   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:41.386094   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:41.386351   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:41.793999   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:41.815657   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:42.294345   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:42.315625   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:42.795487   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:42.816258   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.295433   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:43.315734   86928 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1209 23:45:43.795264   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:43.816629   86928 kapi.go:107] duration metric: took 1m11.505013998s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1209 23:45:44.294398   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:44.796425   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:45.294346   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:45.794877   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:46.295123   86928 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1209 23:45:46.794849   86928 kapi.go:107] duration metric: took 1m11.004245607s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1209 23:45:46.796475   86928 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-327804 cluster.
	I1209 23:45:46.797718   86928 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1209 23:45:46.798940   86928 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1209 23:45:46.800215   86928 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1209 23:45:46.801498   86928 addons.go:510] duration metric: took 1m22.945635939s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns metrics-server amd-gpu-device-plugin storage-provisioner inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1209 23:45:46.801533   86928 start.go:246] waiting for cluster config update ...
	I1209 23:45:46.801550   86928 start.go:255] writing updated cluster config ...
	I1209 23:45:46.801794   86928 ssh_runner.go:195] Run: rm -f paused
	I1209 23:45:46.851079   86928 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1209 23:45:46.852694   86928 out.go:177] * Done! kubectl is now configured to use "addons-327804" cluster and "default" namespace by default
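
The repeated kapi.go:96 lines above come from polling pods by label selector until each addon's pods leave Pending, and the kapi.go:107 lines record how long each selector took. A minimal client-go sketch of that polling pattern follows; it is a hypothetical waitForPodsRunning helper, not minikube's actual kapi.go implementation, and the kubeconfig path, namespace, selector, and timeout are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in namespace until every
// matching pod reports phase Running, or the timeout elapses.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				ready = false
				// Mirrors the "waiting for pod ..., current state: ..." lines in the log above.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods matching %q", selector)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Selector chosen to match the ingress-nginx wait seen in the log above.
	if err := waitForPodsRunning(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 5*time.Minute); err != nil {
		panic(err)
	}
}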
	
	
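The gcp-auth hint above says a pod can opt out of credential mounting by carrying a `gcp-auth-skip-secret` label. A short sketch of creating such a pod with client-go, under stated assumptions: the pod name, image, and label value "true" are illustrative, since the log message only names the key.

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodWithoutGCPCreds creates a pod labeled so that the gcp-auth addon
// skips mounting GCP credentials into it, per the message in the log above.
// The label value "true" is an assumption; only the key is documented here.
func createPodWithoutGCPCreds(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
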
	==> CRI-O <==
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.775005121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349774985022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fac75a6d-676f-4318-8127-d454a56f8812 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.775410319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cf597f2-98b0-47b6-afa1-ba85e8b1bce6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.775477696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cf597f2-98b0-47b6-afa1-ba85e8b1bce6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.775824059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb58b2009ed01d2340ef8639b7ad53279919933e12787dfd79e3cd1c7432c0c,PodSandboxId:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733788164823342395,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a
491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378786
7553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cf597f2-98b0-47b6-afa1-ba85e8b1bce6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.807944527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61a1756b-f085-4caa-acf5-372d44a7de10 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.808011343Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61a1756b-f085-4caa-acf5-372d44a7de10 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.809155760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9c0119a-6e2a-40d4-a441-1eec307ca106 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.810532556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349810510474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9c0119a-6e2a-40d4-a441-1eec307ca106 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.811226951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad5887ef-c12a-47f9-a92b-09be6670736b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.811290916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad5887ef-c12a-47f9-a92b-09be6670736b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.811546683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb58b2009ed01d2340ef8639b7ad53279919933e12787dfd79e3cd1c7432c0c,PodSandboxId:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733788164823342395,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a
491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378786
7553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad5887ef-c12a-47f9-a92b-09be6670736b name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.844590170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed9769af-81af-4f6a-bafe-39449b1e90b7 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.844645734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed9769af-81af-4f6a-bafe-39449b1e90b7 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.845963218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d38b055-07ce-43dc-8982-743cb808052a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.847210295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349847187913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d38b055-07ce-43dc-8982-743cb808052a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.847809586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95d006cc-5f18-4e52-b78b-b5c129c0ca46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.847878962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95d006cc-5f18-4e52-b78b-b5c129c0ca46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.848169885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb58b2009ed01d2340ef8639b7ad53279919933e12787dfd79e3cd1c7432c0c,PodSandboxId:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733788164823342395,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a
491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378786
7553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95d006cc-5f18-4e52-b78b-b5c129c0ca46 name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.876008632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=887cfc90-fd6e-4a8f-a7e3-bace040333b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.876065731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=887cfc90-fd6e-4a8f-a7e3-bace040333b0 name=/runtime.v1.RuntimeService/Version
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.877059930Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2238479c-1222-4e21-b380-fa10b58926e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.878332281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349878312662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2238479c-1222-4e21-b380-fa10b58926e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.878818923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82f3e65d-17fb-4e86-b98c-201c4886e64e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.878877020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82f3e65d-17fb-4e86-b98c-201c4886e64e name=/runtime.v1.RuntimeService/ListContainers
	Dec 09 23:52:29 addons-327804 crio[666]: time="2024-12-09 23:52:29.879153869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4bb58b2009ed01d2340ef8639b7ad53279919933e12787dfd79e3cd1c7432c0c,PodSandboxId:2bad02ebfdcd822650359b23afc7c26484f38015b51b81c2a01e66ac35213866,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733788164823342395,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bc72w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9388cfab-df21-4794-9e5f-bfb3d41b1b70,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:276e11c8f6782e380c0f486412c268839f8233a540e9b2d467396ac652bf4a47,PodSandboxId:1668e155efb26586b8750b1e2ba60d8222c62672828d561f3dfd47f301131591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733788026565376632,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 52fd3c65-4d51-4779-8a7a-3c2bcae19f57,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abf32bbf73a9a16d1843e074dfd0c7e9b3b75c0cdfbda53d3f27c3896034112,PodSandboxId:b308b70e914fa946f03d1ed30379ad4cb26beb6132a443225eae71281957ff6a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733787950077364680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9c2cba33-a47e-457a-a
491-52d554257a4e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47f74f33f56dd8f631e8b7c34e226d3a390572b438ecd5317752269f2b712956,PodSandboxId:4b615de9a52a5bd05ad93fde9178c4d50b960c7c680cef54ca44dd821afca585,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733787916333196322,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4d528,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 8de05551-49ab-4933-852a-16b88842a109,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f092f3706f9b2467e609b06256d3f4b093f0d58624e3a44eb7a493316bfd49b,PodSandboxId:b0d0cf3c6c6d71aef972962da305b287bb681c9cca0fa0bc38a17ad55fc96adc,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733787907495254837,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-zwvjn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3bfe6e8f-f3f4-41af-b636-360335e84680,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa1e0ef4d0d0ab2293f66afe89c73d4d2885098538a0d3f2291228119b05e0ef,PodSandboxId:464b08afec1f7c2828afe1d7006233cf00a024bfa2271fe41226e10c1a6d1b27,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733787898767395853,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkmlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 017587ab-2377-4f9e-92e2-218a17992ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3,PodSandboxId:6b6250eeaa11fd27ac90ace35489a6e879d9a27d23c697133c0b4d2100f754df,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733787870047044732,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f8c8e7e-aef5-4f97-8808-537836392fb1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293,PodSandboxId:142b2695e8e20ba3a81a8b11d079289fadeca3e70b91d93bd87beee09a786858,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173378786
7553987881,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r5t4g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a0c206f-316c-4ffb-9211-a965ab776e73,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4,PodSandboxId:1b8cea9b8d2c3a7150c3d02d442ba36053545bda7c03fb9d8d49b83a88fb1637,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733787865085046141,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cbzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee54203a-77d6-4367-8ccb-208364419fea,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1,PodSandboxId:d2dfcc30b6ae4dc7bfbf6684d3099dcce8a0a8dd269cf23dae23630762e06eb4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733787853899008999,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b824167c258264e67ae998070ea377e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb,PodSandboxId:b177a2183d7b3a2d09b7f2101dd94f4860db7c15af5da024a07e4f1d7e485878,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733787853887280251,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f6d6360675bee157f47d84a79c68be5,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272,PodSandboxId:1f3ad20dfd95fd6970be0e1821c8c4406fc7974f3e36acb6fb9eaac28abde1ab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733787853901231415,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22f91c377887de075e795aacdcfeb14,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf,PodSandboxId:5c712051047779633d5ea786900384f55c87e676c0621e153cc1fa2642df587f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733787853893272511,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-327804,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7898fb83e756cb65e3a9035b190f7aee,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82f3e65d-17fb-4e86-b98c-201c4886e64e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4bb58b2009ed0       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   2bad02ebfdcd8       hello-world-app-55bf9c44b4-bc72w
	276e11c8f6782       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   1668e155efb26       nginx
	0abf32bbf73a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   b308b70e914fa       busybox
	47f74f33f56dd       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   4b615de9a52a5       metrics-server-84c5f94fbc-4d528
	0f092f3706f9b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   b0d0cf3c6c6d7       local-path-provisioner-86d989889c-zwvjn
	aa1e0ef4d0d0a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   464b08afec1f7       amd-gpu-device-plugin-pkmlz
	477d0ec756e0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   6b6250eeaa11f       storage-provisioner
	e092c5623388a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   142b2695e8e20       coredns-7c65d6cfc9-r5t4g
	4c12f7a2107cd       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   1b8cea9b8d2c3       kube-proxy-2cbzc
	6063e15fb3524       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   1f3ad20dfd95f       kube-controller-manager-addons-327804
	1d77a9f595d88       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   d2dfcc30b6ae4       etcd-addons-327804
	b886b264255fd       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   5c71205104777       kube-apiserver-addons-327804
	273b5817c8ec5       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   b177a2183d7b3       kube-scheduler-addons-327804
	
	
	==> coredns [e092c5623388adf51a509866b9b3eb75beb44708feeed26e6c48bebab630f293] <==
	[INFO] 10.244.0.22:41424 - 15798 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085384s
	[INFO] 10.244.0.22:39186 - 64672 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067621s
	[INFO] 10.244.0.22:41424 - 27457 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069492s
	[INFO] 10.244.0.22:41424 - 63880 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061855s
	[INFO] 10.244.0.22:39186 - 51472 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093986s
	[INFO] 10.244.0.22:41424 - 23562 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000141261s
	[INFO] 10.244.0.22:39186 - 60605 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072886s
	[INFO] 10.244.0.22:39186 - 51574 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050901s
	[INFO] 10.244.0.22:39186 - 14360 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064179s
	[INFO] 10.244.0.22:39186 - 51526 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000097843s
	[INFO] 10.244.0.22:39186 - 41181 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119297s
	[INFO] 10.244.0.22:36687 - 31231 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115729s
	[INFO] 10.244.0.22:45696 - 8210 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000157159s
	[INFO] 10.244.0.22:36687 - 13108 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119434s
	[INFO] 10.244.0.22:36687 - 34485 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064245s
	[INFO] 10.244.0.22:45696 - 17168 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091357s
	[INFO] 10.244.0.22:36687 - 37143 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068244s
	[INFO] 10.244.0.22:45696 - 49346 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000075781s
	[INFO] 10.244.0.22:36687 - 55088 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049194s
	[INFO] 10.244.0.22:45696 - 10721 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077199s
	[INFO] 10.244.0.22:36687 - 32085 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041882s
	[INFO] 10.244.0.22:45696 - 28934 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000035352s
	[INFO] 10.244.0.22:36687 - 31147 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000043649s
	[INFO] 10.244.0.22:45696 - 42945 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087029s
	[INFO] 10.244.0.22:45696 - 60495 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045728s
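
The string of NXDOMAIN answers above is ordinary resolver behaviour rather than a CoreDNS fault: with the usual in-cluster resolv.conf (search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5), the client retries hello-world-app.default.svc.cluster.local with each search suffix appended, and only the fully qualified name returns NOERROR. A minimal sketch of that final lookup, assuming it runs from a pod inside this cluster:

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// The trailing dot marks the name as fully qualified, so the resolver
	// skips search-path expansion and asks CoreDNS for it directly; this is
	// the query that returns NOERROR in the log above.
	const fqdn = "hello-world-app.default.svc.cluster.local."

	addrs, err := net.DefaultResolver.LookupHost(context.Background(), fqdn)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs)
}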
	
	
	==> describe nodes <==
	Name:               addons-327804
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-327804
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=addons-327804
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T23_44_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-327804
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-327804
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:52:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:49:56 +0000   Mon, 09 Dec 2024 23:44:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    addons-327804
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 a88e00e239984a4881f4ee141420868c
	  System UUID:                a88e00e2-3998-4a48-81f4-ee141420868c
	  Boot ID:                    5ecd71d7-fc05-46ad-bf4f-2a572fc8b0b9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  default                     hello-world-app-55bf9c44b4-bc72w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 amd-gpu-device-plugin-pkmlz                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 coredns-7c65d6cfc9-r5t4g                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m6s
	  kube-system                 etcd-addons-327804                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m11s
	  kube-system                 kube-apiserver-addons-327804               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-addons-327804      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-2cbzc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kube-system                 kube-scheduler-addons-327804               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-84c5f94fbc-4d528            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m1s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  local-path-storage          local-path-provisioner-86d989889c-zwvjn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m4s   kube-proxy       
	  Normal  Starting                 8m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m11s  kubelet          Node addons-327804 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m11s  kubelet          Node addons-327804 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m11s  kubelet          Node addons-327804 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m10s  kubelet          Node addons-327804 status is now: NodeReady
	  Normal  RegisteredNode           8m7s   node-controller  Node addons-327804 event: Registered Node addons-327804 in Controller
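
The Allocated resources figures can be reproduced from the pod table above: CPU requests are 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m, i.e. 42.5% of the node's 2 CPUs, and memory requests are 70Mi + 100Mi + 200Mi = 370Mi, about 9.7% of the 3912780Ki allocatable, which the table above reports as 42% and 9%. A small sketch of that arithmetic, using only the numbers listed above:

package main

import "fmt"

func main() {
	// CPU requests in millicores: coredns, etcd, kube-apiserver,
	// kube-controller-manager, kube-scheduler, metrics-server.
	cpuRequests := []int{100, 100, 250, 200, 100, 100}
	// Memory requests in Mi: coredns, etcd, metrics-server.
	memRequestsMi := []int{70, 100, 200}

	totalCPU, totalMemMi := 0, 0
	for _, c := range cpuRequests {
		totalCPU += c
	}
	for _, m := range memRequestsMi {
		totalMemMi += m
	}

	const allocatableCPUMilli = 2000 // 2 CPUs
	const allocatableMemKi = 3912780 // from the Allocatable block

	fmt.Printf("cpu: %dm (%.1f%%)\n", totalCPU, float64(totalCPU)/allocatableCPUMilli*100)
	fmt.Printf("memory: %dMi (%.1f%%)\n", totalMemMi, float64(totalMemMi*1024)/allocatableMemKi*100)
}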
	
	
	==> dmesg <==
	[  +5.231975] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.147892] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.031426] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.143707] kauditd_printk_skb: 127 callbacks suppressed
	[  +5.455973] kauditd_printk_skb: 64 callbacks suppressed
	[ +11.408867] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 9 23:45] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.096501] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.890482] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.322026] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.387412] kauditd_printk_skb: 42 callbacks suppressed
	[  +8.508099] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.309521] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.461136] kauditd_printk_skb: 14 callbacks suppressed
	[Dec 9 23:46] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.602918] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.566614] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.246945] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.889739] kauditd_printk_skb: 32 callbacks suppressed
	[Dec 9 23:47] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.050351] kauditd_printk_skb: 51 callbacks suppressed
	[ +10.776318] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.856995] kauditd_printk_skb: 7 callbacks suppressed
	[Dec 9 23:49] kauditd_printk_skb: 49 callbacks suppressed
	[  +6.760827] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [1d77a9f595d88a18bedce788dd2a66f58c67b3954f5dbd5ee5343fbecea91cc1] <==
	{"level":"info","ts":"2024-12-09T23:45:39.000134Z","caller":"traceutil/trace.go:171","msg":"trace[1563361563] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"319.644022ms","start":"2024-12-09T23:45:38.680474Z","end":"2024-12-09T23:45:39.000118Z","steps":["trace[1563361563] 'process raft request'  (duration: 319.554765ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:39.000342Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:45:38.680460Z","time spent":"319.733217ms","remote":"127.0.0.1:51388","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1064 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-12-09T23:45:39.000659Z","caller":"traceutil/trace.go:171","msg":"trace[52526850] linearizableReadLoop","detail":"{readStateIndex:1107; appliedIndex:1107; }","duration":"220.218029ms","start":"2024-12-09T23:45:38.780432Z","end":"2024-12-09T23:45:39.000650Z","steps":["trace[52526850] 'read index received'  (duration: 220.215262ms)","trace[52526850] 'applied index is now lower than readState.Index'  (duration: 2.184µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:45:39.000778Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.296435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:39.000801Z","caller":"traceutil/trace.go:171","msg":"trace[312828656] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1074; }","duration":"220.367395ms","start":"2024-12-09T23:45:38.780428Z","end":"2024-12-09T23:45:39.000795Z","steps":["trace[312828656] 'agreement among raft nodes before linearized reading'  (duration: 220.264249ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:39.001100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.42476ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:39.001132Z","caller":"traceutil/trace.go:171","msg":"trace[1960441612] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1075; }","duration":"199.462473ms","start":"2024-12-09T23:45:38.801663Z","end":"2024-12-09T23:45:39.001125Z","steps":["trace[1960441612] 'agreement among raft nodes before linearized reading'  (duration: 199.391595ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:41.364219Z","caller":"traceutil/trace.go:171","msg":"trace[191935509] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"356.874214ms","start":"2024-12-09T23:45:41.007331Z","end":"2024-12-09T23:45:41.364205Z","steps":["trace[191935509] 'process raft request'  (duration: 356.740068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:41.364396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:45:41.007314Z","time spent":"357.017477ms","remote":"127.0.0.1:51388","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1074 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-12-09T23:45:41.364860Z","caller":"traceutil/trace.go:171","msg":"trace[708113448] linearizableReadLoop","detail":"{readStateIndex:1116; appliedIndex:1116; }","duration":"271.885884ms","start":"2024-12-09T23:45:41.092964Z","end":"2024-12-09T23:45:41.364850Z","steps":["trace[708113448] 'read index received'  (duration: 271.882942ms)","trace[708113448] 'applied index is now lower than readState.Index'  (duration: 2.484µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-09T23:45:41.364996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.021354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:45:41.365052Z","caller":"traceutil/trace.go:171","msg":"trace[940995054] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1083; }","duration":"272.085806ms","start":"2024-12-09T23:45:41.092960Z","end":"2024-12-09T23:45:41.365046Z","steps":["trace[940995054] 'agreement among raft nodes before linearized reading'  (duration: 272.004677ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:45:41.368074Z","caller":"traceutil/trace.go:171","msg":"trace[1489611023] transaction","detail":"{read_only:false; response_revision:1084; number_of_response:1; }","duration":"257.692898ms","start":"2024-12-09T23:45:41.110370Z","end":"2024-12-09T23:45:41.368063Z","steps":["trace[1489611023] 'process raft request'  (duration: 257.536026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:45:41.368330Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.11973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-09T23:45:41.368378Z","caller":"traceutil/trace.go:171","msg":"trace[1600388161] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1084; }","duration":"232.166903ms","start":"2024-12-09T23:45:41.136198Z","end":"2024-12-09T23:45:41.368365Z","steps":["trace[1600388161] 'agreement among raft nodes before linearized reading'  (duration: 232.109152ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:17.705801Z","caller":"traceutil/trace.go:171","msg":"trace[1992682220] linearizableReadLoop","detail":"{readStateIndex:1287; appliedIndex:1286; }","duration":"109.533231ms","start":"2024-12-09T23:46:17.596199Z","end":"2024-12-09T23:46:17.705732Z","steps":["trace[1992682220] 'read index received'  (duration: 109.316689ms)","trace[1992682220] 'applied index is now lower than readState.Index'  (duration: 215.822µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-09T23:46:17.705976Z","caller":"traceutil/trace.go:171","msg":"trace[242282482] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"125.960265ms","start":"2024-12-09T23:46:17.579998Z","end":"2024-12-09T23:46:17.705959Z","steps":["trace[242282482] 'process raft request'  (duration: 125.560564ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:17.706078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.885935ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:17.706139Z","caller":"traceutil/trace.go:171","msg":"trace[1713520814] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1246; }","duration":"109.956672ms","start":"2024-12-09T23:46:17.596173Z","end":"2024-12-09T23:46:17.706130Z","steps":["trace[1713520814] 'agreement among raft nodes before linearized reading'  (duration: 109.817238ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-09T23:46:32.679940Z","caller":"traceutil/trace.go:171","msg":"trace[1539081306] transaction","detail":"{read_only:false; response_revision:1307; number_of_response:1; }","duration":"360.10058ms","start":"2024-12-09T23:46:32.319823Z","end":"2024-12-09T23:46:32.679924Z","steps":["trace[1539081306] 'process raft request'  (duration: 359.743223ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:32.680244Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-09T23:46:32.319803Z","time spent":"360.294711ms","remote":"127.0.0.1:51492","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-327804\" mod_revision:1264 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-327804\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-327804\" > >"}
	{"level":"warn","ts":"2024-12-09T23:46:48.916144Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"320.513091ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:48.916268Z","caller":"traceutil/trace.go:171","msg":"trace[1542012280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1409; }","duration":"320.655307ms","start":"2024-12-09T23:46:48.595595Z","end":"2024-12-09T23:46:48.916250Z","steps":["trace[1542012280] 'range keys from in-memory index tree'  (duration: 320.495445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-09T23:46:48.916269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.83623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-09T23:46:48.916313Z","caller":"traceutil/trace.go:171","msg":"trace[312283620] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1409; }","duration":"262.932259ms","start":"2024-12-09T23:46:48.653372Z","end":"2024-12-09T23:46:48.916304Z","steps":["trace[312283620] 'range keys from in-memory index tree'  (duration: 262.790265ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:52:30 up 8 min,  0 users,  load average: 0.02, 0.44, 0.37
	Linux addons-327804 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b886b264255fd9fe80ac49b1aca8ef8200cdaa6c5c343eb5887ac2f9f67978bf] <==
	 > logger="UnhandledError"
	E1209 23:46:20.191586       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	E1209 23:46:20.193224       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	E1209 23:46:20.199280       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.81.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.81.58:443: connect: connection refused" logger="UnhandledError"
	I1209 23:46:20.279030       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1209 23:46:27.688073       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.240.248"}
	I1209 23:46:58.300619       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1209 23:47:01.981025       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1209 23:47:02.154044       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.228.194"}
	I1209 23:47:08.029300       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1209 23:47:09.164023       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1209 23:47:25.690074       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.690128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.719938       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.720037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.766293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.766388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.834865       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.835018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1209 23:47:25.878524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1209 23:47:25.878571       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1209 23:47:26.835478       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1209 23:47:26.878469       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1209 23:47:26.898842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1209 23:49:22.366967       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.162.113"}
	
	
	==> kube-controller-manager [6063e15fb3524d96dc672d687d4ca98c38148f7ee9be778dc199f39e6c8d3272] <==
	E1209 23:50:10.183033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:16.727599       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:16.727711       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:44.898359       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:44.898629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:50.853076       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:50.853227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:50:54.112673       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:50:54.112831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:13.245901       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:13.246040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:22.637319       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:22.637358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:28.180624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:28.180683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:51:31.598418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:51:31.598540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:03.495480       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:03.495564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:05.636581       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:05.636634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:16.182065       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:16.182210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1209 23:52:16.221020       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1209 23:52:16.221071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4c12f7a2107cd71062a09726bd3d76c5f44f89c88e8a790971efc9478227e5c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:44:25.964014       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:44:25.982301       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	E1209 23:44:25.982372       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:44:26.094533       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:44:26.094578       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:44:26.094610       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:44:26.101262       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:44:26.102877       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:44:26.102932       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:44:26.107311       1 config.go:199] "Starting service config controller"
	I1209 23:44:26.107333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:44:26.107350       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:44:26.107354       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:44:26.107726       1 config.go:328] "Starting node config controller"
	I1209 23:44:26.107736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:44:26.209684       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:44:26.209721       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:44:26.209788       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [273b5817c8ec56556f17299edd5c8b59d06c0b65fc84c875c837d9afc2dfa8cb] <==
	W1209 23:44:16.386374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:16.387325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:16.387405       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386599       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:16.387423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:16.386680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:44:16.387507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.208521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1209 23:44:17.208572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.350967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.351031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.450145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1209 23:44:17.450192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.458854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1209 23:44:17.458901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.458977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1209 23:44:17.459005       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.500395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1209 23:44:17.500453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.605711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1209 23:44:17.605791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1209 23:44:17.771064       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1209 23:44:17.771112       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1209 23:44:19.977675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 09 23:51:18 addons-327804 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 23:51:18 addons-327804 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 23:51:18 addons-327804 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 23:51:19 addons-327804 kubelet[1204]: E1209 23:51:19.186141    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788279185677800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:19 addons-327804 kubelet[1204]: E1209 23:51:19.186205    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788279185677800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:29 addons-327804 kubelet[1204]: E1209 23:51:29.188959    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788289188556014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:29 addons-327804 kubelet[1204]: E1209 23:51:29.189346    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788289188556014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:39 addons-327804 kubelet[1204]: E1209 23:51:39.191871    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788299191500437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:39 addons-327804 kubelet[1204]: E1209 23:51:39.192008    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788299191500437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:49 addons-327804 kubelet[1204]: E1209 23:51:49.194575    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788309194092555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:49 addons-327804 kubelet[1204]: E1209 23:51:49.194664    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788309194092555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:59 addons-327804 kubelet[1204]: E1209 23:51:59.197075    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788319196651531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:51:59 addons-327804 kubelet[1204]: E1209 23:51:59.197373    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788319196651531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:09 addons-327804 kubelet[1204]: E1209 23:52:09.199799    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788329199343711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:09 addons-327804 kubelet[1204]: E1209 23:52:09.200056    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788329199343711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:18 addons-327804 kubelet[1204]: E1209 23:52:18.935578    1204 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 09 23:52:18 addons-327804 kubelet[1204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 09 23:52:18 addons-327804 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 09 23:52:18 addons-327804 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 09 23:52:18 addons-327804 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 09 23:52:19 addons-327804 kubelet[1204]: E1209 23:52:19.202701    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788339202219302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:19 addons-327804 kubelet[1204]: E1209 23:52:19.202858    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788339202219302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:24 addons-327804 kubelet[1204]: I1209 23:52:24.923387    1204 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 09 23:52:29 addons-327804 kubelet[1204]: E1209 23:52:29.206358    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349205856277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 09 23:52:29 addons-327804 kubelet[1204]: E1209 23:52:29.206401    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733788349205856277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [477d0ec756e0a69860c40ffccc988a97d8bb376ef42c03ae36978781110b8bc3] <==
	I1209 23:44:30.417277       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:44:30.446309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:44:30.446376       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:44:30.467001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:44:30.467126       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e!
	I1209 23:44:30.468600       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"71e33a61-5d8a-4fa0-8994-9afd2fadca64", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e became leader
	I1209 23:44:30.567849       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-327804_a0856a7e-3fbd-4790-9663-a09ea878408e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-327804 -n addons-327804
helpers_test.go:261: (dbg) Run:  kubectl --context addons-327804 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (364.54s)
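Note: the repeated "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" reflector errors near the top of the log dump are commonly seen when the metrics.k8s.io aggregated API is registered but not being served while the metrics-server addon is unhealthy. A minimal manual check, assuming the profile's kubeconfig context and the addon's usual k8s-app=metrics-server label (illustrative commands, not part of the recorded test run):
	kubectl --context addons-327804 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-327804 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-327804 top nodes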

                                                
                                    
TestAddons/StoppedEnableDisable (154.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-327804
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-327804: exit status 82 (2m0.459340767s)

                                                
                                                
-- stdout --
	* Stopping node "addons-327804"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-327804" : exit status 82
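Note: exit status 82 with GUEST_STOP_TIMEOUT means the kvm2 driver waited out its full stop loop while libvirt kept reporting the guest as "Running". When reproducing locally, the domain state can be inspected, and as a last resort forced off, with plain libvirt tooling; this is a sketch only, and it assumes the kvm2 driver's default behaviour of naming the libvirt domain after the profile:
	virsh list --all
	virsh dominfo addons-327804
	virsh destroy addons-327804    # hard power-off; skips the graceful shutdown the test expects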
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-327804
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-327804: exit status 11 (21.598168989s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-327804" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-327804
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-327804: exit status 11 (6.144629022s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-327804" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-327804
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-327804: exit status 11 (6.143024314s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.22:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-327804" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.35s)
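Note: the three addon commands above fail identically because minikube cannot open an SSH session to the node at 192.168.39.22:22 ("no route to host") after the failed stop left the VM in an undefined state. A quick reachability check from the host, assuming standard networking tools on the Jenkins agent (illustrative only):
	ping -c 3 192.168.39.22
	nc -zv 192.168.39.22 22
	out/minikube-linux-amd64 -p addons-327804 ssh "echo ok"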

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 node stop m02 -v=7 --alsologtostderr
E1210 00:10:29.785195   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:47.491162   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:50.266634   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:11:31.228458   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:12:10.560163   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-070032 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.463008304s)

                                                
                                                
-- stdout --
	* Stopping node "ha-070032-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:10:29.063305  102049 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:10:29.063442  102049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:10:29.063455  102049 out.go:358] Setting ErrFile to fd 2...
	I1210 00:10:29.063462  102049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:10:29.063702  102049 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:10:29.063938  102049 mustload.go:65] Loading cluster: ha-070032
	I1210 00:10:29.064303  102049 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:10:29.064320  102049 stop.go:39] StopHost: ha-070032-m02
	I1210 00:10:29.064718  102049 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:10:29.064753  102049 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:10:29.079694  102049 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42671
	I1210 00:10:29.080176  102049 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:10:29.080776  102049 main.go:141] libmachine: Using API Version  1
	I1210 00:10:29.080799  102049 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:10:29.081168  102049 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:10:29.083247  102049 out.go:177] * Stopping node "ha-070032-m02"  ...
	I1210 00:10:29.084374  102049 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 00:10:29.084400  102049 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:10:29.084620  102049 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 00:10:29.084651  102049 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:10:29.087913  102049 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:10:29.088349  102049 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:10:29.088378  102049 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:10:29.088519  102049 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:10:29.088686  102049 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:10:29.088850  102049 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:10:29.088974  102049 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:10:29.178999  102049 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 00:10:29.230765  102049 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 00:10:29.284398  102049 main.go:141] libmachine: Stopping "ha-070032-m02"...
	I1210 00:10:29.284424  102049 main.go:141] libmachine: (ha-070032-m02) Calling .GetState
	I1210 00:10:29.285923  102049 main.go:141] libmachine: (ha-070032-m02) Calling .Stop
	I1210 00:10:29.289431  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 0/120
	I1210 00:10:30.290928  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 1/120
	I1210 00:10:31.292254  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 2/120
	I1210 00:10:32.293766  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 3/120
	I1210 00:10:33.294946  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 4/120
	I1210 00:10:34.296396  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 5/120
	I1210 00:10:35.297680  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 6/120
	I1210 00:10:36.298965  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 7/120
	I1210 00:10:37.301021  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 8/120
	I1210 00:10:38.302636  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 9/120
	I1210 00:10:39.304687  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 10/120
	I1210 00:10:40.306915  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 11/120
	I1210 00:10:41.308484  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 12/120
	I1210 00:10:42.310008  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 13/120
	I1210 00:10:43.311455  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 14/120
	I1210 00:10:44.313502  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 15/120
	I1210 00:10:45.314810  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 16/120
	I1210 00:10:46.317294  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 17/120
	I1210 00:10:47.318585  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 18/120
	I1210 00:10:48.320430  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 19/120
	I1210 00:10:49.322463  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 20/120
	I1210 00:10:50.323861  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 21/120
	I1210 00:10:51.325122  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 22/120
	I1210 00:10:52.326688  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 23/120
	I1210 00:10:53.327953  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 24/120
	I1210 00:10:54.329611  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 25/120
	I1210 00:10:55.331130  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 26/120
	I1210 00:10:56.332464  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 27/120
	I1210 00:10:57.333938  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 28/120
	I1210 00:10:58.335326  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 29/120
	I1210 00:10:59.336821  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 30/120
	I1210 00:11:00.338213  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 31/120
	I1210 00:11:01.339658  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 32/120
	I1210 00:11:02.341033  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 33/120
	I1210 00:11:03.343172  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 34/120
	I1210 00:11:04.345272  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 35/120
	I1210 00:11:05.346701  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 36/120
	I1210 00:11:06.348174  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 37/120
	I1210 00:11:07.349786  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 38/120
	I1210 00:11:08.351457  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 39/120
	I1210 00:11:09.353542  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 40/120
	I1210 00:11:10.354954  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 41/120
	I1210 00:11:11.357057  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 42/120
	I1210 00:11:12.358499  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 43/120
	I1210 00:11:13.359809  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 44/120
	I1210 00:11:14.361623  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 45/120
	I1210 00:11:15.362897  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 46/120
	I1210 00:11:16.365097  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 47/120
	I1210 00:11:17.366386  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 48/120
	I1210 00:11:18.367819  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 49/120
	I1210 00:11:19.369942  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 50/120
	I1210 00:11:20.371165  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 51/120
	I1210 00:11:21.373013  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 52/120
	I1210 00:11:22.374900  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 53/120
	I1210 00:11:23.376986  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 54/120
	I1210 00:11:24.379005  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 55/120
	I1210 00:11:25.380938  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 56/120
	I1210 00:11:26.382153  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 57/120
	I1210 00:11:27.383546  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 58/120
	I1210 00:11:28.384813  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 59/120
	I1210 00:11:29.387006  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 60/120
	I1210 00:11:30.388325  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 61/120
	I1210 00:11:31.389538  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 62/120
	I1210 00:11:32.390964  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 63/120
	I1210 00:11:33.392188  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 64/120
	I1210 00:11:34.393764  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 65/120
	I1210 00:11:35.395012  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 66/120
	I1210 00:11:36.396347  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 67/120
	I1210 00:11:37.397696  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 68/120
	I1210 00:11:38.399024  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 69/120
	I1210 00:11:39.401011  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 70/120
	I1210 00:11:40.403179  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 71/120
	I1210 00:11:41.405315  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 72/120
	I1210 00:11:42.407524  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 73/120
	I1210 00:11:43.408637  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 74/120
	I1210 00:11:44.410406  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 75/120
	I1210 00:11:45.412151  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 76/120
	I1210 00:11:46.413279  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 77/120
	I1210 00:11:47.414489  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 78/120
	I1210 00:11:48.415749  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 79/120
	I1210 00:11:49.417057  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 80/120
	I1210 00:11:50.418354  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 81/120
	I1210 00:11:51.419940  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 82/120
	I1210 00:11:52.421167  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 83/120
	I1210 00:11:53.423017  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 84/120
	I1210 00:11:54.424415  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 85/120
	I1210 00:11:55.425838  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 86/120
	I1210 00:11:56.427096  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 87/120
	I1210 00:11:57.428501  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 88/120
	I1210 00:11:58.429765  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 89/120
	I1210 00:11:59.431739  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 90/120
	I1210 00:12:00.433069  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 91/120
	I1210 00:12:01.434419  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 92/120
	I1210 00:12:02.435753  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 93/120
	I1210 00:12:03.437469  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 94/120
	I1210 00:12:04.439638  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 95/120
	I1210 00:12:05.441563  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 96/120
	I1210 00:12:06.442866  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 97/120
	I1210 00:12:07.444974  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 98/120
	I1210 00:12:08.446336  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 99/120
	I1210 00:12:09.448385  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 100/120
	I1210 00:12:10.449752  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 101/120
	I1210 00:12:11.451098  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 102/120
	I1210 00:12:12.453121  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 103/120
	I1210 00:12:13.454518  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 104/120
	I1210 00:12:14.456918  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 105/120
	I1210 00:12:15.458271  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 106/120
	I1210 00:12:16.459720  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 107/120
	I1210 00:12:17.461011  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 108/120
	I1210 00:12:18.462930  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 109/120
	I1210 00:12:19.464882  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 110/120
	I1210 00:12:20.466278  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 111/120
	I1210 00:12:21.467548  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 112/120
	I1210 00:12:22.469183  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 113/120
	I1210 00:12:23.470493  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 114/120
	I1210 00:12:24.472451  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 115/120
	I1210 00:12:25.473715  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 116/120
	I1210 00:12:26.475080  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 117/120
	I1210 00:12:27.477174  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 118/120
	I1210 00:12:28.478514  102049 main.go:141] libmachine: (ha-070032-m02) Waiting for machine to stop 119/120
	I1210 00:12:29.479181  102049 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 00:12:29.479351  102049 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-070032 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr: (18.746938392s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
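Note: the status assertions above fire because the stop attempt timed out with ha-070032-m02 still reported as "Running", so the subsequent status check does not show the node counts the test expects. To see what the cluster itself reports in this state, assuming the profile's kubeconfig context and the usual kubeadm component=kube-apiserver label on the static pods (illustrative commands):
	kubectl --context ha-070032 get nodes -o wide
	kubectl --context ha-070032 -n kube-system get pods -l component=kube-apiserver -o wide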
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.334463074s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m03_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
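
The audit table above records the copy/verify round-trips from TestMultiControlPlane/serial/CopyFile: each `cp` into a node is immediately followed by an `ssh -n <node> sudo cat` to confirm the file landed. For reproducing a single round-trip by hand, a minimal sketch is below; it assumes the ha-070032 profile still exists and that `minikube` is on PATH (the profile and node names are taken from the table, the helper name is illustrative).

// copycheck.go: sketch of one cp-then-verify round-trip against an
// existing minikube profile (profile "ha-070032", node "ha-070032-m04"
// as in the audit table above).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file into the m04 node, then read it back over SSH.
	run("-p", "ha-070032", "cp", "testdata/cp-test.txt", "ha-070032-m04:/home/docker/cp-test.txt")
	out := run("-p", "ha-070032", "ssh", "-n", "ha-070032-m04", "--", "sudo", "cat", "/home/docker/cp-test.txt")
	fmt.Print(out)
}
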
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:05:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:05:52.791526   97943 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:52.791657   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791669   97943 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:52.791677   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791857   97943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:52.792405   97943 out.go:352] Setting JSON to false
	I1210 00:05:52.793229   97943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6504,"bootTime":1733782649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:52.793329   97943 start.go:139] virtualization: kvm guest
	I1210 00:05:52.796124   97943 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:52.797192   97943 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:52.797225   97943 notify.go:220] Checking for updates...
	I1210 00:05:52.799407   97943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:52.800504   97943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:52.801675   97943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:52.802744   97943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:52.803783   97943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:52.805109   97943 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:52.839813   97943 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:05:52.840958   97943 start.go:297] selected driver: kvm2
	I1210 00:05:52.841009   97943 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:05:52.841037   97943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:52.841764   97943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.841862   97943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:05:52.856053   97943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:05:52.856105   97943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:05:52.856343   97943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:52.856388   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:05:52.856439   97943 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1210 00:05:52.856451   97943 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 00:05:52.856513   97943 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:52.856629   97943 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.858290   97943 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:05:52.859441   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:05:52.859486   97943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:05:52.859496   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:05:52.859571   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:05:52.859584   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:05:52.859883   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:05:52.859904   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json: {Name:mke01e2b75d6b946a14cfa49d40b8237b928645a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:52.860050   97943 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:05:52.860091   97943 start.go:364] duration metric: took 24.816µs to acquireMachinesLock for "ha-070032"
	I1210 00:05:52.860115   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:52.860185   97943 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:05:52.862431   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:05:52.862625   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:52.862674   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:52.876494   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1210 00:05:52.876866   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:52.877406   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:05:52.877428   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:52.877772   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:52.877940   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:05:52.878106   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:05:52.878243   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:05:52.878282   97943 client.go:168] LocalClient.Create starting
	I1210 00:05:52.878351   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:05:52.878400   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878419   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878472   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:05:52.878494   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878509   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878535   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:05:52.878545   97943 main.go:141] libmachine: (ha-070032) Calling .PreCreateCheck
	I1210 00:05:52.878920   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:05:52.879333   97943 main.go:141] libmachine: Creating machine...
	I1210 00:05:52.879348   97943 main.go:141] libmachine: (ha-070032) Calling .Create
	I1210 00:05:52.879474   97943 main.go:141] libmachine: (ha-070032) Creating KVM machine...
	I1210 00:05:52.880541   97943 main.go:141] libmachine: (ha-070032) DBG | found existing default KVM network
	I1210 00:05:52.881177   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.881049   97966 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1210 00:05:52.881198   97943 main.go:141] libmachine: (ha-070032) DBG | created network xml: 
	I1210 00:05:52.881212   97943 main.go:141] libmachine: (ha-070032) DBG | <network>
	I1210 00:05:52.881222   97943 main.go:141] libmachine: (ha-070032) DBG |   <name>mk-ha-070032</name>
	I1210 00:05:52.881231   97943 main.go:141] libmachine: (ha-070032) DBG |   <dns enable='no'/>
	I1210 00:05:52.881237   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881250   97943 main.go:141] libmachine: (ha-070032) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:05:52.881265   97943 main.go:141] libmachine: (ha-070032) DBG |     <dhcp>
	I1210 00:05:52.881279   97943 main.go:141] libmachine: (ha-070032) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:05:52.881290   97943 main.go:141] libmachine: (ha-070032) DBG |     </dhcp>
	I1210 00:05:52.881301   97943 main.go:141] libmachine: (ha-070032) DBG |   </ip>
	I1210 00:05:52.881310   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881318   97943 main.go:141] libmachine: (ha-070032) DBG | </network>
	I1210 00:05:52.881328   97943 main.go:141] libmachine: (ha-070032) DBG | 
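
The XML dumped above is the private libvirt network the kvm2 driver generates before the VM exists: gateway 192.168.39.1, DHCP range .2-.253, DNS disabled. When that step needs to be reproduced or debugged outside minikube, roughly the same thing can be done with virsh; the sketch below shells out to virsh and assumes it is on PATH with permission to manage qemu:///system networks (on a host where minikube already created mk-ha-070032, net-define will simply report that the network exists).

// netdefine.go: sketch of defining and starting a libvirt network like
// mk-ha-070032 above, via virsh (XML copied verbatim from the log).
package main

import (
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-070032</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func virsh(args ...string) {
	cmd := exec.Command("virsh", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("virsh %v: %v", args, err)
	}
}

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	virsh("net-define", f.Name()) // register the network definition
	virsh("net-start", "mk-ha-070032")
}
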
	I1210 00:05:52.886258   97943 main.go:141] libmachine: (ha-070032) DBG | trying to create private KVM network mk-ha-070032 192.168.39.0/24...
	I1210 00:05:52.950347   97943 main.go:141] libmachine: (ha-070032) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:52.950384   97943 main.go:141] libmachine: (ha-070032) DBG | private KVM network mk-ha-070032 192.168.39.0/24 created
	I1210 00:05:52.950396   97943 main.go:141] libmachine: (ha-070032) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:05:52.950439   97943 main.go:141] libmachine: (ha-070032) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:05:52.950463   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.950265   97966 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.225909   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.225784   97966 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa...
	I1210 00:05:53.325235   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325112   97966 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk...
	I1210 00:05:53.325266   97943 main.go:141] libmachine: (ha-070032) DBG | Writing magic tar header
	I1210 00:05:53.325288   97943 main.go:141] libmachine: (ha-070032) DBG | Writing SSH key tar header
	I1210 00:05:53.325300   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325244   97966 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:53.325369   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032
	I1210 00:05:53.325394   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 (perms=drwx------)
	I1210 00:05:53.325428   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:05:53.325447   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.325560   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:05:53.325599   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:05:53.325634   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:05:53.325659   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:05:53.325669   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:05:53.325681   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:05:53.325695   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:05:53.325703   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home
	I1210 00:05:53.325715   97943 main.go:141] libmachine: (ha-070032) DBG | Skipping /home - not owner
	I1210 00:05:53.325747   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:05:53.325763   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:53.326682   97943 main.go:141] libmachine: (ha-070032) define libvirt domain using xml: 
	I1210 00:05:53.326699   97943 main.go:141] libmachine: (ha-070032) <domain type='kvm'>
	I1210 00:05:53.326705   97943 main.go:141] libmachine: (ha-070032)   <name>ha-070032</name>
	I1210 00:05:53.326709   97943 main.go:141] libmachine: (ha-070032)   <memory unit='MiB'>2200</memory>
	I1210 00:05:53.326714   97943 main.go:141] libmachine: (ha-070032)   <vcpu>2</vcpu>
	I1210 00:05:53.326718   97943 main.go:141] libmachine: (ha-070032)   <features>
	I1210 00:05:53.326744   97943 main.go:141] libmachine: (ha-070032)     <acpi/>
	I1210 00:05:53.326772   97943 main.go:141] libmachine: (ha-070032)     <apic/>
	I1210 00:05:53.326783   97943 main.go:141] libmachine: (ha-070032)     <pae/>
	I1210 00:05:53.326806   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.326826   97943 main.go:141] libmachine: (ha-070032)   </features>
	I1210 00:05:53.326854   97943 main.go:141] libmachine: (ha-070032)   <cpu mode='host-passthrough'>
	I1210 00:05:53.326865   97943 main.go:141] libmachine: (ha-070032)   
	I1210 00:05:53.326872   97943 main.go:141] libmachine: (ha-070032)   </cpu>
	I1210 00:05:53.326882   97943 main.go:141] libmachine: (ha-070032)   <os>
	I1210 00:05:53.326889   97943 main.go:141] libmachine: (ha-070032)     <type>hvm</type>
	I1210 00:05:53.326900   97943 main.go:141] libmachine: (ha-070032)     <boot dev='cdrom'/>
	I1210 00:05:53.326906   97943 main.go:141] libmachine: (ha-070032)     <boot dev='hd'/>
	I1210 00:05:53.326920   97943 main.go:141] libmachine: (ha-070032)     <bootmenu enable='no'/>
	I1210 00:05:53.326944   97943 main.go:141] libmachine: (ha-070032)   </os>
	I1210 00:05:53.326956   97943 main.go:141] libmachine: (ha-070032)   <devices>
	I1210 00:05:53.326966   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='cdrom'>
	I1210 00:05:53.326982   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/boot2docker.iso'/>
	I1210 00:05:53.326995   97943 main.go:141] libmachine: (ha-070032)       <target dev='hdc' bus='scsi'/>
	I1210 00:05:53.327012   97943 main.go:141] libmachine: (ha-070032)       <readonly/>
	I1210 00:05:53.327027   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327039   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='disk'>
	I1210 00:05:53.327051   97943 main.go:141] libmachine: (ha-070032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:05:53.327066   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk'/>
	I1210 00:05:53.327074   97943 main.go:141] libmachine: (ha-070032)       <target dev='hda' bus='virtio'/>
	I1210 00:05:53.327080   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327086   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327091   97943 main.go:141] libmachine: (ha-070032)       <source network='mk-ha-070032'/>
	I1210 00:05:53.327096   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327101   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327107   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327127   97943 main.go:141] libmachine: (ha-070032)       <source network='default'/>
	I1210 00:05:53.327131   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327138   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327142   97943 main.go:141] libmachine: (ha-070032)     <serial type='pty'>
	I1210 00:05:53.327147   97943 main.go:141] libmachine: (ha-070032)       <target port='0'/>
	I1210 00:05:53.327152   97943 main.go:141] libmachine: (ha-070032)     </serial>
	I1210 00:05:53.327157   97943 main.go:141] libmachine: (ha-070032)     <console type='pty'>
	I1210 00:05:53.327167   97943 main.go:141] libmachine: (ha-070032)       <target type='serial' port='0'/>
	I1210 00:05:53.327176   97943 main.go:141] libmachine: (ha-070032)     </console>
	I1210 00:05:53.327183   97943 main.go:141] libmachine: (ha-070032)     <rng model='virtio'>
	I1210 00:05:53.327188   97943 main.go:141] libmachine: (ha-070032)       <backend model='random'>/dev/random</backend>
	I1210 00:05:53.327201   97943 main.go:141] libmachine: (ha-070032)     </rng>
	I1210 00:05:53.327208   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327212   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327219   97943 main.go:141] libmachine: (ha-070032)   </devices>
	I1210 00:05:53.327223   97943 main.go:141] libmachine: (ha-070032) </domain>
	I1210 00:05:53.327229   97943 main.go:141] libmachine: (ha-070032) 
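
The domain XML above (2 vCPUs, 2200 MiB, the boot2docker ISO attached as a CD-ROM, a raw virtio disk, and one NIC each on mk-ha-070032 and the default network) is what the kvm2 driver defines before starting the VM. When a machine comes up with the wrong hardware or never gets an address, comparing the live definition against this dump is usually the quickest check; a small sketch, assuming virsh is installed and the ha-070032 domain exists:

// dumpdomain.go: sketch of inspecting the domain the kvm2 driver created
// (assumes virsh on PATH and a domain named "ha-070032").
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"dumpxml", "ha-070032"},   // full live definition, to diff against the log
		{"domifaddr", "ha-070032"}, // DHCP leases seen on the domain's interfaces
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		fmt.Printf("== virsh %v ==\n%s\n", args, out)
	}
}
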
	I1210 00:05:53.331717   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:3e:64:27 in network default
	I1210 00:05:53.332300   97943 main.go:141] libmachine: (ha-070032) Ensuring networks are active...
	I1210 00:05:53.332321   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:53.332935   97943 main.go:141] libmachine: (ha-070032) Ensuring network default is active
	I1210 00:05:53.333268   97943 main.go:141] libmachine: (ha-070032) Ensuring network mk-ha-070032 is active
	I1210 00:05:53.333775   97943 main.go:141] libmachine: (ha-070032) Getting domain xml...
	I1210 00:05:53.334418   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:54.486671   97943 main.go:141] libmachine: (ha-070032) Waiting to get IP...
	I1210 00:05:54.487631   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.488004   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.488023   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.487962   97966 retry.go:31] will retry after 250.94638ms: waiting for machine to come up
	I1210 00:05:54.740488   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.740898   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.740922   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.740853   97966 retry.go:31] will retry after 369.652496ms: waiting for machine to come up
	I1210 00:05:55.112670   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.113058   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.113088   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.113006   97966 retry.go:31] will retry after 419.563235ms: waiting for machine to come up
	I1210 00:05:55.534593   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.535015   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.535042   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.534960   97966 retry.go:31] will retry after 426.548067ms: waiting for machine to come up
	I1210 00:05:55.963569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.963962   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.963978   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.963937   97966 retry.go:31] will retry after 617.965427ms: waiting for machine to come up
	I1210 00:05:56.583725   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:56.584072   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:56.584105   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:56.584063   97966 retry.go:31] will retry after 856.526353ms: waiting for machine to come up
	I1210 00:05:57.442311   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:57.442739   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:57.442796   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:57.442703   97966 retry.go:31] will retry after 1.178569719s: waiting for machine to come up
	I1210 00:05:58.622338   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:58.622797   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:58.622827   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:58.622728   97966 retry.go:31] will retry after 1.42624777s: waiting for machine to come up
	I1210 00:06:00.051240   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:00.051614   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:00.051640   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:00.051572   97966 retry.go:31] will retry after 1.801666778s: waiting for machine to come up
	I1210 00:06:01.855728   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:01.856159   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:01.856181   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:01.856123   97966 retry.go:31] will retry after 2.078837624s: waiting for machine to come up
	I1210 00:06:03.936907   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:03.937387   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:03.937421   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:03.937345   97966 retry.go:31] will retry after 2.395168214s: waiting for machine to come up
	I1210 00:06:06.336012   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:06.336380   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:06.336409   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:06.336336   97966 retry.go:31] will retry after 2.386978523s: waiting for machine to come up
	I1210 00:06:08.725386   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:08.725781   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:08.725809   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:08.725749   97966 retry.go:31] will retry after 4.346211813s: waiting for machine to come up
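
The retry.go lines above are the wait-for-IP loop: every failed lookup schedules another attempt with a growing delay (roughly 250 ms at first, a few seconds by the end) until the DHCP lease appears, which in this run takes about twenty seconds after the domain is created. A standalone sketch of the same poll-with-backoff pattern is below; waitForLease and its virsh probe are illustrative, not minikube's actual retry package.

// waitip.go: sketch of a grow-and-retry poll like the wait-for-IP loop above
// (hypothetical helper; probes virsh domifaddr until a lease shows up).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls until virsh reports an address or the deadline passes.
func waitForLease(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, _ := exec.Command("virsh", "domifaddr", domain).Output()
		if s := string(out); strings.Contains(s, "ipv4") {
			return s, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second { // grow the delay between attempts, capped
			delay *= 2
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", domain, timeout)
}

func main() {
	lease, err := waitForLease("ha-070032", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(lease)
}
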
	I1210 00:06:13.073905   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.074439   97943 main.go:141] libmachine: (ha-070032) Found IP for machine: 192.168.39.187
	I1210 00:06:13.074469   97943 main.go:141] libmachine: (ha-070032) Reserving static IP address...
	I1210 00:06:13.074487   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has current primary IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.075078   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "ha-070032", mac: "52:54:00:ad:ce:dc", ip: "192.168.39.187"} in network mk-ha-070032
	I1210 00:06:13.145743   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:13.145776   97943 main.go:141] libmachine: (ha-070032) Reserved static IP address: 192.168.39.187
	I1210 00:06:13.145818   97943 main.go:141] libmachine: (ha-070032) Waiting for SSH to be available...
	I1210 00:06:13.148440   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.148825   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032
	I1210 00:06:13.148851   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:ad:ce:dc
	I1210 00:06:13.149012   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:13.149039   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:13.149072   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:13.149085   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:13.149097   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:13.152933   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:06:13.152951   97943 main.go:141] libmachine: (ha-070032) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:06:13.152957   97943 main.go:141] libmachine: (ha-070032) DBG | command : exit 0
	I1210 00:06:13.152962   97943 main.go:141] libmachine: (ha-070032) DBG | err     : exit status 255
	I1210 00:06:13.152969   97943 main.go:141] libmachine: (ha-070032) DBG | output  : 
	I1210 00:06:16.155027   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:16.157296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157685   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.157714   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157840   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:16.157860   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:16.157887   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:16.157900   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:16.157909   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:16.278179   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: <nil>: 
	I1210 00:06:16.278456   97943 main.go:141] libmachine: (ha-070032) KVM machine creation complete!
	I1210 00:06:16.278762   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:16.279308   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279502   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279643   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:06:16.279659   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:16.280933   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:06:16.280956   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:06:16.280962   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:06:16.280968   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.283215   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283661   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.283689   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283820   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.283997   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284144   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284266   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.284430   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.284659   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.284672   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:06:16.381723   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.381748   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:06:16.381756   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.384507   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384824   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.384850   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384978   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.385166   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385349   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385493   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.385645   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.385854   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.385866   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:06:16.482791   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:06:16.482875   97943 main.go:141] libmachine: found compatible host: buildroot
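
Provisioner detection above amounts to reading /etc/os-release over SSH: the ID and NAME fields in the output are what identify the guest as Buildroot, after which libmachine provisions it accordingly. A minimal sketch of the same lookup (a hypothetical helper that reads a local file rather than going over SSH):

// osrelease.go: sketch of the ID= lookup used for provisioner detection above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func osReleaseID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID= field in %s", path)
}

func main() {
	id, err := osReleaseID("/etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("detected distro ID:", id) // "buildroot" inside the minikube VM
}
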
	I1210 00:06:16.482890   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:06:16.482898   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483155   97943 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:06:16.483181   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483360   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.485848   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486193   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.486234   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486327   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.486524   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486696   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486841   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.486993   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.487168   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.487182   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:06:16.599563   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:06:16.599595   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.602261   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602629   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.602659   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602789   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.603020   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603241   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603430   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.603599   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.603761   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.603781   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:06:16.710380   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
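
The shell run just above is the hostname provisioning step: set the hostname to ha-070032, then make sure /etc/hosts maps 127.0.1.1 to it, replacing an existing 127.0.1.1 line if one is present and appending one otherwise. A slightly simplified local sketch of that idempotent edit (a hypothetical helper, not the code minikube executes over SSH):

// hostsentry.go: sketch of the 127.0.1.1 edit shown above, applied to the
// local /etc/hosts and printed rather than written back.
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns hosts-file content that maps 127.0.1.1 to name.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // replace the existing mapping
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // append
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(ensureHostsEntry(string(in), "ha-070032"))
}
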
	I1210 00:06:16.710422   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:06:16.710472   97943 buildroot.go:174] setting up certificates
	I1210 00:06:16.710489   97943 provision.go:84] configureAuth start
	I1210 00:06:16.710503   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.710783   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:16.713296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713682   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.713712   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713807   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.716284   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716639   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.716657   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716807   97943 provision.go:143] copyHostCerts
	I1210 00:06:16.716848   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716882   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:06:16.716898   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716962   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:06:16.717048   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717075   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:06:16.717082   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717107   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:06:16.717158   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717175   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:06:16.717181   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717202   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:06:16.717253   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
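
The copyHostCerts and server-cert steps above generate machines/server.pem signed by the local minikube CA, with SANs for 127.0.0.1, the VM IP 192.168.39.187, the hostname ha-070032, localhost and minikube. If TLS to the machine later fails, dumping those SANs is the first sanity check; a sketch, assuming openssl is installed and using the path from this run:

// certsans.go: sketch of dumping the generated server certificate
// (openssl assumed on PATH; the path below is the one from the log).
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	pem := "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem"
	// -text prints the full certificate, including the Subject Alternative Name block.
	out, err := exec.Command("openssl", "x509", "-in", pem, "-noout", "-text").CombinedOutput()
	if err != nil {
		log.Fatalf("openssl: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}
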
	I1210 00:06:16.857455   97943 provision.go:177] copyRemoteCerts
	I1210 00:06:16.857514   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:06:16.857542   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.860287   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860660   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.860687   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860918   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.861136   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.861318   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.861436   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:16.940074   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:06:16.940147   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:06:16.961938   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:06:16.962011   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:06:16.982947   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:06:16.983027   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:06:17.003600   97943 provision.go:87] duration metric: took 293.095287ms to configureAuth
	I1210 00:06:17.003631   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:06:17.003823   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:17.003908   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.006244   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006580   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.006608   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006735   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.006932   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007076   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007191   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.007315   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.007484   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.007502   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:06:17.211708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
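
The command above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' (the cluster's service CIDR from the config earlier in the log) and restarts CRI-O, so registries exposed on in-cluster service IPs can be pulled from without TLS. If this step is suspected of failing, the drop-in and the service state can be checked from the host; a sketch assuming the ha-070032 profile is running and minikube is on PATH:

// criocheck.go: sketch of verifying the CRI-O drop-in written above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func sshRun(cmd string) string {
	out, err := exec.Command("minikube", "-p", "ha-070032", "ssh", "--", cmd).CombinedOutput()
	if err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
	return string(out)
}

func main() {
	fmt.Print(sshRun("cat /etc/sysconfig/crio.minikube")) // should show CRIO_MINIKUBE_OPTIONS
	fmt.Print(sshRun("sudo systemctl is-active crio"))    // expect "active" after the restart
}
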
	
	I1210 00:06:17.211741   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:06:17.211753   97943 main.go:141] libmachine: (ha-070032) Calling .GetURL
	I1210 00:06:17.212951   97943 main.go:141] libmachine: (ha-070032) DBG | Using libvirt version 6000000
	I1210 00:06:17.215245   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215611   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.215644   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215769   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:06:17.215785   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:06:17.215796   97943 client.go:171] duration metric: took 24.337498941s to LocalClient.Create
	I1210 00:06:17.215826   97943 start.go:167] duration metric: took 24.337582238s to libmachine.API.Create "ha-070032"
	I1210 00:06:17.215839   97943 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:06:17.215862   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:06:17.215886   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.216149   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:06:17.216177   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.218250   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218590   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.218632   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218752   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.218921   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.219062   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.219188   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.296211   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:06:17.300251   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:06:17.300276   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:06:17.300345   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:06:17.300421   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:06:17.300431   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:06:17.300529   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:06:17.308961   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:17.331496   97943 start.go:296] duration metric: took 115.636437ms for postStartSetup
	I1210 00:06:17.331591   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:17.332201   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.335151   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335527   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.335569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335747   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:17.335921   97943 start.go:128] duration metric: took 24.475725142s to createHost
	I1210 00:06:17.335945   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.338044   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338384   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.338412   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338541   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.338741   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.338882   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.339001   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.339163   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.339337   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.339348   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:06:17.439329   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789177.417194070
	
	I1210 00:06:17.439361   97943 fix.go:216] guest clock: 1733789177.417194070
	I1210 00:06:17.439372   97943 fix.go:229] Guest: 2024-12-10 00:06:17.41719407 +0000 UTC Remote: 2024-12-10 00:06:17.335933593 +0000 UTC m=+24.582014233 (delta=81.260477ms)
	I1210 00:06:17.439408   97943 fix.go:200] guest clock delta is within tolerance: 81.260477ms
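[Editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and only act when the delta exceeds a tolerance (here 81ms is accepted). A minimal, self-contained sketch of that comparison; the 2s tolerance and the local `/bin/sh` stand-in for the SSH session are assumptions, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta runs `date +%s.%N` (locally here, over SSH in the log)
    // and returns how far that clock is from the local clock.
    func guestClockDelta() (time.Duration, error) {
    	out, err := exec.Command("/bin/sh", "-c", "date +%s.%N").Output()
    	if err != nil {
    		return 0, err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return time.Since(guest), nil
    }

    func main() {
    	delta, err := guestClockDelta()
    	if err != nil {
    		panic(err)
    	}
    	// Hypothetical tolerance; the real threshold lives in minikube's fix.go.
    	const tolerance = 2 * time.Second
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would re-sync\n", delta)
    	}
    }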
	I1210 00:06:17.439416   97943 start.go:83] releasing machines lock for "ha-070032", held for 24.579311872s
	I1210 00:06:17.439440   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.439778   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.442802   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443261   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.443289   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443497   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444002   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444206   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444324   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:06:17.444401   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.444474   97943 ssh_runner.go:195] Run: cat /version.json
	I1210 00:06:17.444500   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.446933   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447294   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447320   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447352   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447499   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.447688   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.447744   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447772   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447844   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.447953   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.448103   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.448103   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.448278   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.448402   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.553500   97943 ssh_runner.go:195] Run: systemctl --version
	I1210 00:06:17.559183   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:06:17.714099   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:06:17.720445   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:06:17.720522   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:06:17.735693   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:06:17.735715   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:06:17.735777   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:06:17.750781   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:06:17.763333   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:06:17.763379   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:06:17.775483   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:06:17.787288   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:06:17.890184   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:06:18.028147   97943 docker.go:233] disabling docker service ...
	I1210 00:06:18.028234   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:06:18.041611   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:06:18.054485   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:06:18.194456   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:06:18.314202   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:06:18.327181   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:06:18.343918   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:06:18.343989   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.353427   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:06:18.353489   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.362859   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.371991   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.381017   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:06:18.391381   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.401252   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.416290   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.426233   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:06:18.435267   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:06:18.435316   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:06:18.447946   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
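[Editor's note] The failed sysctl above is expected on a fresh VM: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once `br_netfilter` is loaded, so the runner falls back to `modprobe br_netfilter` and then re-enables IPv4 forwarding. A hedged sketch of the same probe-then-modprobe fallback, using the paths and commands from the log (the wrapper itself is illustrative, not minikube's code, and needs root to take effect):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl key only appears after the br_netfilter module is loaded.
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(key); err != nil {
    		fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe failed: %v: %s\n", err, out)
    			return
    		}
    	}
    	// Mirror of `echo 1 > /proc/sys/net/ipv4/ip_forward` from the log.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
    		fmt.Printf("enabling ip_forward needs root: %v\n", err)
    	}
    }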
	I1210 00:06:18.456951   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:18.573205   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:06:18.656643   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:06:18.656726   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:06:18.661011   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:06:18.661071   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:06:18.664478   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:06:18.701494   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:06:18.701578   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.727238   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.753327   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:06:18.754595   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:18.756947   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757200   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:18.757235   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757445   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:06:18.760940   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
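[Editor's note] The two commands above first check whether `host.minikube.internal` is already pinned in /etc/hosts and, if not, rewrite the file with any stale mapping filtered out and the fresh `192.168.39.1` entry appended; the same pattern is used later for control-plane.minikube.internal. A rough Go equivalent of that filter-and-append step, assuming direct file access instead of the SSH runner and writing to a scratch path rather than /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name to ip,
    // mirroring the `{ grep -v ...; echo ...; } > tmp; cp` pipeline in the log.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
    			continue // drop any stale mapping for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Illustrative only: a scratch copy stands in for /etc/hosts.
    	if err := pinHost("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err)
    	}
    }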
	I1210 00:06:18.772727   97943 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:06:18.772828   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:18.772879   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:18.804204   97943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:06:18.804265   97943 ssh_runner.go:195] Run: which lz4
	I1210 00:06:18.807579   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1210 00:06:18.807670   97943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:06:18.811358   97943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:06:18.811386   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:06:19.965583   97943 crio.go:462] duration metric: took 1.157944737s to copy over tarball
	I1210 00:06:19.965660   97943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:06:21.934864   97943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.969164039s)
	I1210 00:06:21.934896   97943 crio.go:469] duration metric: took 1.969285734s to extract the tarball
	I1210 00:06:21.934906   97943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:06:21.970025   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:22.022669   97943 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:06:22.022692   97943 cache_images.go:84] Images are preloaded, skipping loading
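[Editor's note] The preload decision above hinges on parsing `sudo crictl images --output json`: before the tarball is copied the expected kube-apiserver image is missing ("assuming images are not preloaded"), and after extraction the same probe reports all images present. A small sketch of that probe; the JSON field names (`images`, `repoTags`) are assumed from crictl's output format rather than taken from the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Minimal view of `crictl images --output json`; only the fields this
    // check needs are declared.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl not available here:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	// Same probe as the log: is the expected apiserver image already present?
    	want := "registry.k8s.io/kube-apiserver:v1.31.2"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if strings.Contains(tag, want) {
    				fmt.Println("preloaded images present, skipping load")
    				return
    			}
    		}
    	}
    	fmt.Println("preload missing, would copy and extract preloaded.tar.lz4")
    }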
	I1210 00:06:22.022702   97943 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:06:22.022843   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:06:22.022948   97943 ssh_runner.go:195] Run: crio config
	I1210 00:06:22.066130   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:22.066152   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:22.066160   97943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:06:22.066182   97943 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:06:22.066308   97943 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
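[Editor's note] The kubeadm config above is rendered from the options struct logged at kubeadm.go:189 (advertise address, API server port, CRI socket, node name, and so on). A tiny illustrative rendering of just the InitConfiguration fragment via text/template; minikube's real template is much larger, and only the field names visible in the log are reused here:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative template covering the first stanza of the generated config.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	opts := struct {
    		AdvertiseAddress string
    		APIServerPort    int
    		CRISocket        string
    		NodeName         string
    	}{"192.168.39.187", 8443, "/var/run/crio/crio.sock", "ha-070032"}

    	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
    	if err := tmpl.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }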
	
	I1210 00:06:22.066339   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:06:22.066403   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:06:22.080860   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:06:22.080973   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1210 00:06:22.081051   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:06:22.089866   97943 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:06:22.089923   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:06:22.098290   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:06:22.112742   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:06:22.127069   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:06:22.141317   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1210 00:06:22.155689   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:06:22.159003   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:06:22.169321   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:22.288035   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:06:22.303534   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:06:22.303559   97943 certs.go:194] generating shared ca certs ...
	I1210 00:06:22.303580   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.303764   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:06:22.303807   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:06:22.303816   97943 certs.go:256] generating profile certs ...
	I1210 00:06:22.303867   97943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:06:22.303881   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt with IP's: []
	I1210 00:06:22.579094   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt ...
	I1210 00:06:22.579127   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt: {Name:mk6da1df398501169ebaa4be6e0991a8cdf439ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579330   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key ...
	I1210 00:06:22.579344   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key: {Name:mkcfad0deb7a44a0416ffc9ec52ed32ba5314a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579449   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8
	I1210 00:06:22.579465   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.254]
	I1210 00:06:22.676685   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 ...
	I1210 00:06:22.676712   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8: {Name:mke16dbfb98e7219f2bbc6176b557aae983cf59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.676895   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 ...
	I1210 00:06:22.676911   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8: {Name:mke38a755e8856925c614e9671ffbd341e4bacfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.677005   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:06:22.677102   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:06:22.677175   97943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:06:22.677191   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt with IP's: []
	I1210 00:06:23.248653   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt ...
	I1210 00:06:23.248694   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt: {Name:mk109f5f541d0487f6eee37e10618be0687d2257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.248940   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key ...
	I1210 00:06:23.248958   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key: {Name:mkb6a55c3dbe59a4c5c10d115460729fd5017c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
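[Editor's note] The profile certs generated above are ordinary x509 certificates whose IP SANs cover the service VIP, loopback, the node IP and the HA virtual IP, exactly the list printed for apiserver.crt.e24980b8. A compact sketch of issuing such a cert with Go's crypto/x509; this is not minikube's crypto.go (which uses RSA keys and signs with the minikubeCA key), just a self-signed illustration with the same SAN list:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// ECDSA keeps the sketch short; minikube itself generates RSA keys.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the log line for the apiserver cert.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.187"),
    			net.ParseIP("192.168.39.254"),
    		},
    	}
    	// Self-signed here for brevity; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }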
	I1210 00:06:23.249084   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:06:23.249122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:06:23.249145   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:06:23.249169   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:06:23.249185   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:06:23.249208   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:06:23.249231   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:06:23.249252   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:06:23.249332   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:06:23.249393   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:06:23.249407   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:06:23.249449   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:06:23.249487   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:06:23.249528   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:06:23.249593   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:23.249643   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.249668   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.249692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.250316   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:06:23.282882   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:06:23.307116   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:06:23.329842   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:06:23.350860   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:06:23.371360   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:06:23.391801   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:06:23.412467   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:06:23.433690   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:06:23.454439   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:06:23.475132   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:06:23.495728   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:06:23.510105   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:06:23.515363   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:06:23.524990   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528859   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528911   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.534177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:06:23.544011   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:06:23.554049   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558290   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558341   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.563770   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:06:23.574235   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:06:23.584591   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588826   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588880   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.594177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:06:23.604355   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:06:23.608126   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:06:23.608176   97943 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:06:23.608256   97943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:06:23.608313   97943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:06:23.644503   97943 cri.go:89] found id: ""
	I1210 00:06:23.644571   97943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:06:23.653924   97943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:06:23.666641   97943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:06:23.677490   97943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:06:23.677512   97943 kubeadm.go:157] found existing configuration files:
	
	I1210 00:06:23.677553   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:06:23.685837   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:06:23.685897   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:06:23.696600   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:06:23.706796   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:06:23.706854   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:06:23.717362   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.727400   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:06:23.727453   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.737844   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:06:23.747833   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:06:23.747889   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:06:23.758170   97943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:06:23.860329   97943 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:06:23.860398   97943 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:06:23.982444   97943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:06:23.982606   97943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:06:23.982761   97943 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:06:23.992051   97943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:06:24.260435   97943 out.go:235]   - Generating certificates and keys ...
	I1210 00:06:24.260672   97943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:06:24.260758   97943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:06:24.260858   97943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:06:24.290159   97943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:06:24.463743   97943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:06:24.802277   97943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:06:24.950429   97943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:06:24.950692   97943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.094704   97943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:06:25.094857   97943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.315955   97943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:06:25.908434   97943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:06:26.061724   97943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:06:26.061977   97943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:06:26.261701   97943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:06:26.508681   97943 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:06:26.626369   97943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:06:26.773060   97943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:06:26.898048   97943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:06:26.900096   97943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:06:26.903197   97943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:06:26.904929   97943 out.go:235]   - Booting up control plane ...
	I1210 00:06:26.905029   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:06:26.905121   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:06:26.905279   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:06:26.919661   97943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:06:26.926359   97943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:06:26.926414   97943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:06:27.050156   97943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:06:27.050350   97943 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:06:27.551278   97943 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.620144ms
	I1210 00:06:27.551408   97943 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:06:33.591605   97943 kubeadm.go:310] [api-check] The API server is healthy after 6.043312277s
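[Editor's note] The two waits above are plain HTTP health polls: the kubelet's /healthz on 127.0.0.1:10248 answers after ~500ms, and the API server check succeeds after ~6s, each with a 4m0s upper bound. A minimal polling loop in the same spirit; the interval and the use of the kubelet endpoint as the example target are illustrative choices:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes,
    // in the spirit of kubeadm's kubelet-check shown in the log.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kubelet is healthy")
    }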
	I1210 00:06:33.609669   97943 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:06:33.625260   97943 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:06:33.653756   97943 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:06:33.653955   97943 kubeadm.go:310] [mark-control-plane] Marking the node ha-070032 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:06:33.666679   97943 kubeadm.go:310] [bootstrap-token] Using token: j34izu.9ybowi8hhzn9pxj2
	I1210 00:06:33.668028   97943 out.go:235]   - Configuring RBAC rules ...
	I1210 00:06:33.668176   97943 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:06:33.684358   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:06:33.695755   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:06:33.698959   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:06:33.704573   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:06:33.710289   97943 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:06:34.000325   97943 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:06:34.440225   97943 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:06:35.001489   97943 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:06:35.002397   97943 kubeadm.go:310] 
	I1210 00:06:35.002481   97943 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:06:35.002492   97943 kubeadm.go:310] 
	I1210 00:06:35.002620   97943 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:06:35.002641   97943 kubeadm.go:310] 
	I1210 00:06:35.002668   97943 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:06:35.002729   97943 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:06:35.002789   97943 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:06:35.002807   97943 kubeadm.go:310] 
	I1210 00:06:35.002880   97943 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:06:35.002909   97943 kubeadm.go:310] 
	I1210 00:06:35.002973   97943 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:06:35.002982   97943 kubeadm.go:310] 
	I1210 00:06:35.003062   97943 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:06:35.003170   97943 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:06:35.003276   97943 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:06:35.003287   97943 kubeadm.go:310] 
	I1210 00:06:35.003407   97943 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:06:35.003521   97943 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:06:35.003539   97943 kubeadm.go:310] 
	I1210 00:06:35.003652   97943 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.003744   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 00:06:35.003795   97943 kubeadm.go:310] 	--control-plane 
	I1210 00:06:35.003809   97943 kubeadm.go:310] 
	I1210 00:06:35.003925   97943 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:06:35.003934   97943 kubeadm.go:310] 
	I1210 00:06:35.004033   97943 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.004174   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 00:06:35.004857   97943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
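If the token or CA hash printed in the join commands above is lost, both can be regenerated on the primary control plane, and the [WARNING Service-Kubelet] message can be addressed on the node; a minimal sketch using standard kubeadm, openssl, and systemctl commands (not captured in this run):

        # re-print a join command with a fresh bootstrap token
        kubeadm token create --print-join-command
        # recompute the --discovery-token-ca-cert-hash from the cluster CA
        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
          | openssl rsa -pubin -outform der 2>/dev/null \
          | openssl dgst -sha256 -hex
        # enable the kubelet service referenced by the warning
        sudo systemctl enable kubelet.service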
	I1210 00:06:35.005000   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:35.005014   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:35.006644   97943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1210 00:06:35.007773   97943 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 00:06:35.013278   97943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1210 00:06:35.013292   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 00:06:35.030575   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
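Once the kindnet manifest has been applied, the CNI rollout can be checked with the same kubeconfig; a hedged sketch (the app=kindnet label is the usual kindnet DaemonSet label and is assumed here, not verified against this manifest):

        kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
          get daemonset,pods -l app=kindnet -o wide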
	I1210 00:06:35.430253   97943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032 minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=true
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:35.453581   97943 ops.go:34] apiserver oom_adj: -16
	I1210 00:06:35.589407   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.090147   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.590386   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.089563   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.589509   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.090045   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.590492   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.670226   97943 kubeadm.go:1113] duration metric: took 3.23992517s to wait for elevateKubeSystemPrivileges
	I1210 00:06:38.670279   97943 kubeadm.go:394] duration metric: took 15.062107151s to StartCluster
	I1210 00:06:38.670305   97943 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.670408   97943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.671197   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.671402   97943 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:38.671412   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 00:06:38.671420   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:06:38.671426   97943 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:06:38.671508   97943 addons.go:69] Setting storage-provisioner=true in profile "ha-070032"
	I1210 00:06:38.671518   97943 addons.go:69] Setting default-storageclass=true in profile "ha-070032"
	I1210 00:06:38.671525   97943 addons.go:234] Setting addon storage-provisioner=true in "ha-070032"
	I1210 00:06:38.671543   97943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-070032"
	I1210 00:06:38.671557   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.671580   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:38.671976   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672006   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672032   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.672011   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.687036   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I1210 00:06:38.687249   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I1210 00:06:38.687528   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.687798   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.688109   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688138   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688273   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688294   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688523   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688665   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688726   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.689111   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.689137   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.690837   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.691061   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:06:38.691470   97943 cert_rotation.go:140] Starting client certificate rotation controller
	I1210 00:06:38.691733   97943 addons.go:234] Setting addon default-storageclass=true in "ha-070032"
	I1210 00:06:38.691777   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.692023   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.692051   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.704916   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1210 00:06:38.705299   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.705773   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.705793   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.705818   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I1210 00:06:38.706223   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.706266   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.706378   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.706814   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.706838   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.707185   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.707762   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.707794   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.707810   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.709839   97943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:06:38.711065   97943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.711090   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:06:38.711109   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.713927   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714361   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.714394   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714642   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.714813   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.715016   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.715175   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.722431   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I1210 00:06:38.722864   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.723276   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.723296   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.723661   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.723828   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.725166   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.725377   97943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:38.725391   97943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:06:38.725405   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.727990   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728394   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.728425   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728556   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.728718   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.728851   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.729006   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.796897   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 00:06:38.828298   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.901174   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:39.211073   97943 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
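The sed pipeline above rewrites the coredns ConfigMap so that a hosts block is inserted before the forward plugin and a log directive before errors; as implied by that command, the injected Corefile fragment looks roughly like:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }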
	I1210 00:06:39.326332   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326356   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326414   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326438   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326675   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326704   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326718   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326722   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326732   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326740   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326767   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326783   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326792   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326799   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326952   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326963   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327027   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.327032   97943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:06:39.327042   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327048   97943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:06:39.327148   97943 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1210 00:06:39.327161   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.327179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.327194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.340698   97943 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1210 00:06:39.341273   97943 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1210 00:06:39.341288   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.341295   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.341298   97943 round_trippers.go:473]     Content-Type: application/json
	I1210 00:06:39.341303   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.344902   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:06:39.345090   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.345105   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.345391   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.345413   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.345420   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.347624   97943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:06:39.348926   97943 addons.go:510] duration metric: took 677.497681ms for enable addons: enabled=[storage-provisioner default-storageclass]
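The enabled addons can be confirmed afterwards from the host; a minimal sketch using standard minikube and kubectl commands, with the profile, pod, and storageclass names taken from this run:

        minikube -p ha-070032 addons list
        kubectl --context ha-070032 -n kube-system get pod storage-provisioner
        kubectl --context ha-070032 get storageclass standard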
	I1210 00:06:39.348959   97943 start.go:246] waiting for cluster config update ...
	I1210 00:06:39.348973   97943 start.go:255] writing updated cluster config ...
	I1210 00:06:39.350585   97943 out.go:201] 
	I1210 00:06:39.351879   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:39.351939   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.353507   97943 out.go:177] * Starting "ha-070032-m02" control-plane node in "ha-070032" cluster
	I1210 00:06:39.354653   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:39.354670   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:06:39.354757   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:06:39.354768   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:06:39.354822   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.354986   97943 start.go:360] acquireMachinesLock for ha-070032-m02: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:06:39.355029   97943 start.go:364] duration metric: took 24.389µs to acquireMachinesLock for "ha-070032-m02"
	I1210 00:06:39.355043   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:39.355103   97943 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1210 00:06:39.356785   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:06:39.356859   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:39.356884   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:39.373740   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I1210 00:06:39.374206   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:39.374743   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:39.374764   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:39.375056   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:39.375244   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:06:39.375358   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:06:39.375496   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:06:39.375520   97943 client.go:168] LocalClient.Create starting
	I1210 00:06:39.375545   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:06:39.375577   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375591   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375644   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:06:39.375662   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375672   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375686   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:06:39.375694   97943 main.go:141] libmachine: (ha-070032-m02) Calling .PreCreateCheck
	I1210 00:06:39.375822   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:06:39.376224   97943 main.go:141] libmachine: Creating machine...
	I1210 00:06:39.376240   97943 main.go:141] libmachine: (ha-070032-m02) Calling .Create
	I1210 00:06:39.376365   97943 main.go:141] libmachine: (ha-070032-m02) Creating KVM machine...
	I1210 00:06:39.377639   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing default KVM network
	I1210 00:06:39.377788   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing private KVM network mk-ha-070032
	I1210 00:06:39.377977   97943 main.go:141] libmachine: (ha-070032-m02) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.378006   97943 main.go:141] libmachine: (ha-070032-m02) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:06:39.378048   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.377952   98310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.378126   97943 main.go:141] libmachine: (ha-070032-m02) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:06:39.655003   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.654863   98310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa...
	I1210 00:06:39.917373   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917261   98310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk...
	I1210 00:06:39.917409   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing magic tar header
	I1210 00:06:39.917424   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing SSH key tar header
	I1210 00:06:39.917437   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917371   98310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.917498   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02
	I1210 00:06:39.917529   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 (perms=drwx------)
	I1210 00:06:39.917548   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:06:39.917560   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:06:39.917572   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:06:39.917584   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:06:39.917605   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:06:39.917616   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.917629   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:06:39.917642   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:06:39.917652   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:06:39.917664   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:06:39.917673   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home
	I1210 00:06:39.917683   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:39.917707   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Skipping /home - not owner
	I1210 00:06:39.918676   97943 main.go:141] libmachine: (ha-070032-m02) define libvirt domain using xml: 
	I1210 00:06:39.918698   97943 main.go:141] libmachine: (ha-070032-m02) <domain type='kvm'>
	I1210 00:06:39.918768   97943 main.go:141] libmachine: (ha-070032-m02)   <name>ha-070032-m02</name>
	I1210 00:06:39.918816   97943 main.go:141] libmachine: (ha-070032-m02)   <memory unit='MiB'>2200</memory>
	I1210 00:06:39.918844   97943 main.go:141] libmachine: (ha-070032-m02)   <vcpu>2</vcpu>
	I1210 00:06:39.918860   97943 main.go:141] libmachine: (ha-070032-m02)   <features>
	I1210 00:06:39.918868   97943 main.go:141] libmachine: (ha-070032-m02)     <acpi/>
	I1210 00:06:39.918874   97943 main.go:141] libmachine: (ha-070032-m02)     <apic/>
	I1210 00:06:39.918881   97943 main.go:141] libmachine: (ha-070032-m02)     <pae/>
	I1210 00:06:39.918890   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.918898   97943 main.go:141] libmachine: (ha-070032-m02)   </features>
	I1210 00:06:39.918908   97943 main.go:141] libmachine: (ha-070032-m02)   <cpu mode='host-passthrough'>
	I1210 00:06:39.918914   97943 main.go:141] libmachine: (ha-070032-m02)   
	I1210 00:06:39.918920   97943 main.go:141] libmachine: (ha-070032-m02)   </cpu>
	I1210 00:06:39.918932   97943 main.go:141] libmachine: (ha-070032-m02)   <os>
	I1210 00:06:39.918939   97943 main.go:141] libmachine: (ha-070032-m02)     <type>hvm</type>
	I1210 00:06:39.918951   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='cdrom'/>
	I1210 00:06:39.918960   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='hd'/>
	I1210 00:06:39.918969   97943 main.go:141] libmachine: (ha-070032-m02)     <bootmenu enable='no'/>
	I1210 00:06:39.918978   97943 main.go:141] libmachine: (ha-070032-m02)   </os>
	I1210 00:06:39.918985   97943 main.go:141] libmachine: (ha-070032-m02)   <devices>
	I1210 00:06:39.918996   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='cdrom'>
	I1210 00:06:39.919011   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/boot2docker.iso'/>
	I1210 00:06:39.919023   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hdc' bus='scsi'/>
	I1210 00:06:39.919034   97943 main.go:141] libmachine: (ha-070032-m02)       <readonly/>
	I1210 00:06:39.919044   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919053   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='disk'>
	I1210 00:06:39.919066   97943 main.go:141] libmachine: (ha-070032-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:06:39.919085   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk'/>
	I1210 00:06:39.919096   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hda' bus='virtio'/>
	I1210 00:06:39.919106   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919113   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919121   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='mk-ha-070032'/>
	I1210 00:06:39.919132   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919140   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919150   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919158   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='default'/>
	I1210 00:06:39.919168   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919177   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919187   97943 main.go:141] libmachine: (ha-070032-m02)     <serial type='pty'>
	I1210 00:06:39.919201   97943 main.go:141] libmachine: (ha-070032-m02)       <target port='0'/>
	I1210 00:06:39.919211   97943 main.go:141] libmachine: (ha-070032-m02)     </serial>
	I1210 00:06:39.919220   97943 main.go:141] libmachine: (ha-070032-m02)     <console type='pty'>
	I1210 00:06:39.919230   97943 main.go:141] libmachine: (ha-070032-m02)       <target type='serial' port='0'/>
	I1210 00:06:39.919239   97943 main.go:141] libmachine: (ha-070032-m02)     </console>
	I1210 00:06:39.919249   97943 main.go:141] libmachine: (ha-070032-m02)     <rng model='virtio'>
	I1210 00:06:39.919261   97943 main.go:141] libmachine: (ha-070032-m02)       <backend model='random'>/dev/random</backend>
	I1210 00:06:39.919271   97943 main.go:141] libmachine: (ha-070032-m02)     </rng>
	I1210 00:06:39.919278   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919287   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919296   97943 main.go:141] libmachine: (ha-070032-m02)   </devices>
	I1210 00:06:39.919305   97943 main.go:141] libmachine: (ha-070032-m02) </domain>
	I1210 00:06:39.919315   97943 main.go:141] libmachine: (ha-070032-m02) 
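After the domain is defined, the same details the log prints (domain XML, DHCP leases, assigned IP) can be inspected directly with libvirt tooling; a short sketch using standard virsh commands (not part of this run):

        virsh dumpxml ha-070032-m02
        virsh net-dhcp-leases mk-ha-070032
        virsh domifaddr ha-070032-m02 --source lease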
	I1210 00:06:39.926117   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:48:53:e3 in network default
	I1210 00:06:39.926859   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring networks are active...
	I1210 00:06:39.926888   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:39.927703   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network default is active
	I1210 00:06:39.928027   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network mk-ha-070032 is active
	I1210 00:06:39.928408   97943 main.go:141] libmachine: (ha-070032-m02) Getting domain xml...
	I1210 00:06:39.929223   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:41.130495   97943 main.go:141] libmachine: (ha-070032-m02) Waiting to get IP...
	I1210 00:06:41.131359   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.131738   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.131767   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.131705   98310 retry.go:31] will retry after 310.664463ms: waiting for machine to come up
	I1210 00:06:41.444273   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.444703   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.444737   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.444646   98310 retry.go:31] will retry after 238.189723ms: waiting for machine to come up
	I1210 00:06:41.683967   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.684372   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.684404   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.684311   98310 retry.go:31] will retry after 302.841079ms: waiting for machine to come up
	I1210 00:06:41.988975   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.989468   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.989592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.989406   98310 retry.go:31] will retry after 546.191287ms: waiting for machine to come up
	I1210 00:06:42.536796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:42.537343   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:42.537376   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:42.537279   98310 retry.go:31] will retry after 759.959183ms: waiting for machine to come up
	I1210 00:06:43.299192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.299592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.299618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.299550   98310 retry.go:31] will retry after 662.514804ms: waiting for machine to come up
	I1210 00:06:43.963192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.963574   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.963604   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.963510   98310 retry.go:31] will retry after 928.068602ms: waiting for machine to come up
	I1210 00:06:44.892786   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:44.893282   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:44.893308   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:44.893234   98310 retry.go:31] will retry after 1.121647824s: waiting for machine to come up
	I1210 00:06:46.016637   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:46.017063   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:46.017120   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:46.017054   98310 retry.go:31] will retry after 1.26533881s: waiting for machine to come up
	I1210 00:06:47.283663   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:47.284077   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:47.284103   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:47.284029   98310 retry.go:31] will retry after 1.959318884s: waiting for machine to come up
	I1210 00:06:49.245134   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:49.245690   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:49.245721   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:49.245628   98310 retry.go:31] will retry after 2.080479898s: waiting for machine to come up
	I1210 00:06:51.327593   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:51.327959   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:51.327986   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:51.327912   98310 retry.go:31] will retry after 3.384865721s: waiting for machine to come up
	I1210 00:06:54.714736   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:54.715082   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:54.715116   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:54.715033   98310 retry.go:31] will retry after 4.262963095s: waiting for machine to come up
	I1210 00:06:58.982522   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:58.982919   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:58.982944   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:58.982868   98310 retry.go:31] will retry after 4.754254966s: waiting for machine to come up
	I1210 00:07:03.739570   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740201   97943 main.go:141] libmachine: (ha-070032-m02) Found IP for machine: 192.168.39.198
	I1210 00:07:03.740228   97943 main.go:141] libmachine: (ha-070032-m02) Reserving static IP address...
	I1210 00:07:03.740250   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740875   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "ha-070032-m02", mac: "52:54:00:a4:53:39", ip: "192.168.39.198"} in network mk-ha-070032
	I1210 00:07:03.810694   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:03.810726   97943 main.go:141] libmachine: (ha-070032-m02) Reserved static IP address: 192.168.39.198
	I1210 00:07:03.810777   97943 main.go:141] libmachine: (ha-070032-m02) Waiting for SSH to be available...
	I1210 00:07:03.813164   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.813481   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032
	I1210 00:07:03.813508   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:a4:53:39
	I1210 00:07:03.813691   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:03.813726   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:03.813759   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:03.813774   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:03.813802   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:03.817377   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:07:03.817395   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:07:03.817406   97943 main.go:141] libmachine: (ha-070032-m02) DBG | command : exit 0
	I1210 00:07:03.817413   97943 main.go:141] libmachine: (ha-070032-m02) DBG | err     : exit status 255
	I1210 00:07:03.817429   97943 main.go:141] libmachine: (ha-070032-m02) DBG | output  : 
	I1210 00:07:06.818972   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:06.821618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822027   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.822055   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822215   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:06.822245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:06.822283   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:06.822309   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:06.822322   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:06.950206   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: <nil>: 
	I1210 00:07:06.950523   97943 main.go:141] libmachine: (ha-070032-m02) KVM machine creation complete!
	I1210 00:07:06.950797   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:06.951365   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951576   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951700   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:07:06.951712   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetState
	I1210 00:07:06.952852   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:07:06.952870   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:07:06.952875   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:07:06.952881   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:06.955132   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955556   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.955577   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955708   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:06.955904   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956047   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:06.956344   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:06.956613   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:06.956635   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:07:07.065432   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.065465   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:07:07.065472   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.068281   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068647   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.068676   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068789   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.069000   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069205   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069353   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.069507   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.069682   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.069696   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:07:07.179172   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:07:07.179254   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:07:07.179270   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:07:07.179281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179507   97943 buildroot.go:166] provisioning hostname "ha-070032-m02"
	I1210 00:07:07.179525   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179714   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.182380   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182709   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.182735   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182903   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.183097   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183236   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183392   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.183547   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.183709   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.183720   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m02 && echo "ha-070032-m02" | sudo tee /etc/hostname
	I1210 00:07:07.308107   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m02
	
	I1210 00:07:07.308157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.310796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311128   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.311159   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311367   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.311544   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311834   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.312007   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.312178   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.312195   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:07:07.430746   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.430783   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:07:07.430808   97943 buildroot.go:174] setting up certificates
	I1210 00:07:07.430826   97943 provision.go:84] configureAuth start
	I1210 00:07:07.430840   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.431122   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:07.433939   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434313   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.434337   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434511   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.436908   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437220   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.437245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437409   97943 provision.go:143] copyHostCerts
	I1210 00:07:07.437448   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437491   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:07:07.437503   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437576   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:07:07.437681   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437707   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:07:07.437715   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437755   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:07:07.437820   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437852   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:07:07.437861   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437895   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:07:07.437968   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m02 san=[127.0.0.1 192.168.39.198 ha-070032-m02 localhost minikube]
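
For readers following the provisioning flow, the "generating server cert" step above issues a node certificate whose SAN list covers the names and addresses printed in the log. The following is a minimal, hypothetical standard-library sketch of that idea (it is not minikube's provision.go code; the throwaway CA below stands in for the ca.pem/ca-key.pem pair loaded from the .minikube directory, and errors are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads an existing CA key pair instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-070032-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list mirrors the san=[...] printed in the log line above.
		DNSNames:    []string{"ha-070032-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting PEM corresponds to the server.pem that the next step copies to /etc/docker/server.pem on the guest.
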
	I1210 00:07:08.044773   97943 provision.go:177] copyRemoteCerts
	I1210 00:07:08.044851   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:07:08.044891   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.047538   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.047846   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.047877   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.048076   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.048336   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.048503   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.048649   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.132237   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:07:08.132310   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:07:08.154520   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:07:08.154605   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:07:08.175951   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:07:08.176034   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:07:08.197284   97943 provision.go:87] duration metric: took 766.441651ms to configureAuth
	I1210 00:07:08.197318   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:07:08.197534   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:08.197630   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.200256   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200605   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.200631   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200777   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.200956   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201156   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201290   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.201439   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.201609   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.201622   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:07:08.422427   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:07:08.422470   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:07:08.422479   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetURL
	I1210 00:07:08.423873   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using libvirt version 6000000
	I1210 00:07:08.426057   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426388   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.426419   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426586   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:07:08.426605   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:07:08.426616   97943 client.go:171] duration metric: took 29.051087497s to LocalClient.Create
	I1210 00:07:08.426651   97943 start.go:167] duration metric: took 29.051156503s to libmachine.API.Create "ha-070032"
	I1210 00:07:08.426663   97943 start.go:293] postStartSetup for "ha-070032-m02" (driver="kvm2")
	I1210 00:07:08.426676   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:07:08.426697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.426973   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:07:08.427006   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.429163   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429425   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.429445   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429585   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.429771   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.429939   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.430073   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.511841   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:07:08.515628   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:07:08.515647   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:07:08.515716   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:07:08.515790   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:07:08.515798   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:07:08.515877   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:07:08.524177   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:08.545083   97943 start.go:296] duration metric: took 118.406585ms for postStartSetup
	I1210 00:07:08.545129   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:08.545727   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.548447   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.548762   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.548790   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.549019   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:08.549239   97943 start.go:128] duration metric: took 29.194124447s to createHost
	I1210 00:07:08.549263   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.551249   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551581   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.551601   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551788   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.551950   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552224   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.552368   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.552535   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.552544   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:07:08.658708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789228.640009863
	
	I1210 00:07:08.658732   97943 fix.go:216] guest clock: 1733789228.640009863
	I1210 00:07:08.658742   97943 fix.go:229] Guest: 2024-12-10 00:07:08.640009863 +0000 UTC Remote: 2024-12-10 00:07:08.549251378 +0000 UTC m=+75.795332018 (delta=90.758485ms)
	I1210 00:07:08.658764   97943 fix.go:200] guest clock delta is within tolerance: 90.758485ms
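
The delta printed here is simply the difference between the guest's `date +%s.%N` reading and the host-side reference timestamp shown on the same line. A small standalone Go check (an illustration, not part of the captured log) reproduces the 90.758485ms figure:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N`: 1733789228.640009863
	guest := time.Unix(1733789228, 640009863).UTC()
	// Host-side reference time recorded when the SSH command returned.
	remote := time.Date(2024, 12, 10, 0, 7, 8, 549251378, time.UTC)
	fmt.Println(guest.Sub(remote)) // prints 90.758485ms, inside the drift tolerance
}
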
	I1210 00:07:08.658772   97943 start.go:83] releasing machines lock for "ha-070032-m02", held for 29.303735455s
	I1210 00:07:08.658798   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.659077   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.661426   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.661743   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.661779   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.663916   97943 out.go:177] * Found network options:
	I1210 00:07:08.665147   97943 out.go:177]   - NO_PROXY=192.168.39.187
	W1210 00:07:08.666190   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.666213   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666724   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666867   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666999   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:07:08.667045   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	W1210 00:07:08.667058   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.667145   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:07:08.667170   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.669614   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669829   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669978   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670007   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670217   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670241   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670437   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670446   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670629   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670648   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.670779   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670926   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.901492   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:07:08.907747   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:07:08.907817   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:07:08.923205   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:07:08.923229   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:07:08.923295   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:07:08.937553   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:07:08.950281   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:07:08.950346   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:07:08.962860   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:07:08.975314   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:07:09.086709   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:07:09.237022   97943 docker.go:233] disabling docker service ...
	I1210 00:07:09.237103   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:07:09.249910   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:07:09.261842   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:07:09.377487   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:07:09.489077   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:07:09.503310   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:07:09.520074   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:07:09.520146   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.529237   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:07:09.529299   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.538814   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.547790   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.557022   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:07:09.566274   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.575677   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.591166   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.600226   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:07:09.608899   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:07:09.608959   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:07:09.621054   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:07:09.630324   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:09.745895   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:07:09.836812   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:07:09.836886   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:07:09.841320   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:07:09.841380   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:07:09.845003   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:07:09.887045   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:07:09.887158   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.913628   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.940544   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:07:09.941808   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:07:09.942959   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:09.945644   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946026   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:09.946058   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946322   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:07:09.950215   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:09.961995   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:07:09.962176   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:09.962427   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.962471   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.977140   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I1210 00:07:09.977521   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.978002   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.978024   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.978339   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.978526   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:07:09.979937   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:09.980239   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.980281   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.994247   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 00:07:09.994760   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.995248   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.995276   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.995617   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.995804   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:09.995981   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.198
	I1210 00:07:09.995996   97943 certs.go:194] generating shared ca certs ...
	I1210 00:07:09.996013   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:09.996181   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:07:09.996237   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:07:09.996250   97943 certs.go:256] generating profile certs ...
	I1210 00:07:09.996340   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:07:09.996369   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880
	I1210 00:07:09.996386   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.254]
	I1210 00:07:10.076485   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 ...
	I1210 00:07:10.076513   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880: {Name:mk063fa61de97dbebc815f8cdc0b8ad5f6ad42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076683   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 ...
	I1210 00:07:10.076697   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880: {Name:mk6197070a633b3c7bff009f36273929319901d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076768   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:07:10.076894   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:07:10.077019   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:07:10.077036   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:07:10.077051   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:07:10.077064   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:07:10.077079   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:07:10.077092   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:07:10.077105   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:07:10.077118   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:07:10.077130   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:07:10.077177   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:07:10.077207   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:07:10.077219   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:07:10.077240   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:07:10.077261   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:07:10.077283   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:07:10.077318   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:10.077343   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.077356   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.077368   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.077402   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:10.080314   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080656   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:10.080686   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080849   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:10.081053   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:10.081213   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:10.081346   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:10.150955   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:07:10.156109   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:07:10.172000   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:07:10.175843   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:07:10.191569   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:07:10.195845   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:07:10.205344   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:07:10.208990   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:07:10.218513   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:07:10.222172   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:07:10.231444   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:07:10.235751   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:07:10.245673   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:07:10.268586   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:07:10.289301   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:07:10.309755   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:07:10.330372   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 00:07:10.350734   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:07:10.370944   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:07:10.391160   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:07:10.411354   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:07:10.431480   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:07:10.453051   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:07:10.473317   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:07:10.487731   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:07:10.501999   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:07:10.516876   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:07:10.531860   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:07:10.546723   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:07:10.561653   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:07:10.575903   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:07:10.580966   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:07:10.590633   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594516   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594555   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.599765   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:07:10.609423   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:07:10.619123   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623118   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623159   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.628240   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:07:10.637834   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:07:10.647418   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651160   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651204   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.656233   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:07:10.666013   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:07:10.669458   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:07:10.669508   97943 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.31.2 crio true true} ...
	I1210 00:07:10.669598   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:07:10.669628   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:07:10.669651   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:07:10.689973   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:07:10.690046   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
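
The static-pod manifest above is rendered from a template with the cluster-specific values filled in: the HA virtual IP 192.168.39.254, the interface eth0, and the API server port 8443. As a trimmed, hypothetical sketch of that pattern (the env names mirror the output above, but this is not minikube's actual template file), it could look like this:

package main

import (
	"os"
	"text/template"
)

// vipConfig holds the per-cluster values that vary between profiles.
type vipConfig struct {
	VIP       string // load-balanced control-plane address
	Interface string // NIC the VIP is announced on
	Port      int    // API server port behind the VIP
	Image     string
}

// manifestTmpl is a trimmed stand-in for the full static-pod template;
// only the env entries carrying per-cluster values are shown.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    image: {{ .Image }}
    name: kube-vip
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	_ = t.Execute(os.Stdout, vipConfig{
		VIP:       "192.168.39.254",
		Interface: "eth0",
		Port:      8443,
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.7",
	})
}

The rendered manifest is the file that later gets written to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet picks it up as a static pod.
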
	I1210 00:07:10.690097   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.699806   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:07:10.699859   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.709208   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:07:10.709234   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.709289   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1210 00:07:10.709322   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1210 00:07:10.709296   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.713239   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:07:10.713260   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:07:11.639149   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.639234   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.643871   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:07:11.643902   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:07:11.758059   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:11.787926   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.788041   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.795093   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:07:11.795140   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
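
The kubectl/kubeadm/kubelet downloads a few lines above use the checksum=file:...sha256 URL pattern, i.e. each release binary is verified against its published .sha256 file before being installed under /var/lib/minikube/binaries. As a hypothetical standard-library sketch of that verify-then-install idea (not minikube's downloader; the URL is the one from the log, and the whole binary is held in memory purely for brevity):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, not for 70+ MB
// binaries in production code.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Same release URL pattern as in the download.go lines above.
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}
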
	I1210 00:07:12.180780   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:07:12.189342   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:07:12.205977   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:07:12.220614   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:07:12.235844   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:07:12.239089   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:12.251338   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:12.381143   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:12.396098   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:12.396594   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:12.396651   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:12.412619   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1210 00:07:12.413166   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:12.413744   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:12.413766   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:12.414184   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:12.414391   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:12.414627   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:07:12.414728   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:07:12.414747   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:12.418002   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418418   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:12.418450   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418629   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:12.418810   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:12.418994   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:12.419164   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:12.570827   97943 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:12.570886   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I1210 00:07:32.921639   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (20.350728679s)
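
The join above is a two-step kubeadm flow: the primary control-plane node prints a join command with a non-expiring token, and the new node runs that command with --control-plane so it comes up as an additional API-server/etcd member. Condensed from the two logged commands (token and CA hash replaced with placeholders, preflight and node-name flags omitted), the flow is roughly:

    # on the existing control-plane node (ha-070032): print a reusable join command
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm token create --print-join-command --ttl=0

    # on the joining node (ha-070032-m02): run the printed command as a control-plane join
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.198 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock
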
	I1210 00:07:32.921682   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:07:33.411739   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m02 minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:07:33.552589   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:07:33.681991   97943 start.go:319] duration metric: took 21.26735926s to joinCluster
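
As part of wrapping up the join, the new member is marked as a non-primary minikube node and the control-plane NoSchedule taint is removed so ordinary pods can schedule onto it. Stripped of the version/commit labels, the two kubectl invocations above amount to:

    kubectl label --overwrite nodes ha-070032-m02 \
      minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
    kubectl taint nodes ha-070032-m02 node-role.kubernetes.io/control-plane:NoSchedule-
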
	I1210 00:07:33.682079   97943 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:33.682486   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:33.683556   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:07:33.684723   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:33.911972   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:33.951142   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:07:33.951400   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:07:33.951471   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:07:33.951667   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m02" to be "Ready" ...
	I1210 00:07:33.951780   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:33.951788   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:33.951796   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:33.951800   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:33.961739   97943 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
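
The requests that follow poll GET /api/v1/nodes/ha-070032-m02 roughly twice a second until the node's Ready condition flips to True (about 17.5s later, per the duration metric below). A standalone equivalent of the same wait, assuming the kubeconfig context is named after the profile:

    kubectl --context ha-070032 wait --for=condition=Ready \
      node/ha-070032-m02 --timeout=6m
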
	I1210 00:07:34.452167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.452198   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.452211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.452219   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.456196   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:34.952070   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.952094   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.952105   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.952111   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.957522   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:07:35.452860   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.452883   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.452890   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.452894   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.456005   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.952021   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.952048   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.952058   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.952063   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.955318   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.955854   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:36.452184   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.452211   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.452222   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.452229   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.455126   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:36.951926   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.951955   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.951966   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.951973   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.956909   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:37.452305   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.452330   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.452341   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.452348   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.458679   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:37.952074   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.952096   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.952105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.952111   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.954863   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.452953   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.452983   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.452996   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.453003   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.455946   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.456796   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:38.952594   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.952617   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.952626   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.952630   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.955438   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:39.452632   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.452657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.452669   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.452675   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.455716   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:39.952848   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.952879   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.952893   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.952899   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.956221   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.452071   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.452095   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.452105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.452112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.455375   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.952464   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.952488   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.952507   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.952512   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.955445   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:40.956051   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:41.452509   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.452534   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.452542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.452547   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.455649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:41.952634   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.952657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.952666   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.952669   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.955344   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.452001   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.452023   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.452032   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.452036   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.454753   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.952401   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.952423   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.952436   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.952440   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.955178   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.451951   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.451974   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.451982   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.451986   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.454333   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.454867   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:43.951938   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.951963   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.951973   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.951978   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.954971   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.452196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.452218   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.452225   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.452230   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.455145   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.952295   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.952319   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.952327   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.952331   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.955347   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:45.452137   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.452165   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.452176   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.452181   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.477510   97943 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1210 00:07:45.477938   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:45.952299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.952324   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.952332   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.952335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.955321   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:46.452358   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.452384   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.452393   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.452397   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.455541   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:46.952608   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.952634   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.952643   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.952647   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.957412   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:47.452449   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.452471   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.452480   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.452484   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.455610   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.952117   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.952140   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.952153   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.952158   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.955292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.956098   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:48.452506   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.452532   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.452539   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.452543   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.455102   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:48.952221   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.952248   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.952258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.952265   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.955311   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.452304   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.452327   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.452335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.452340   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.455564   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.952482   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.952504   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.952512   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.952516   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.955476   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.452216   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.452240   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.452248   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.452252   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.455231   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.455908   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:50.952301   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.952323   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.952331   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.952335   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.955916   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.452010   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.452030   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.452039   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.452042   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.454528   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.455097   97943 node_ready.go:49] node "ha-070032-m02" has status "Ready":"True"
	I1210 00:07:51.455120   97943 node_ready.go:38] duration metric: took 17.50342824s for node "ha-070032-m02" to be "Ready" ...
	I1210 00:07:51.455132   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:07:51.455240   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:51.455254   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.455263   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.455267   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.459208   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.466339   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.466409   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:07:51.466417   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.466423   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.466427   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.469050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.469653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.469667   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.469674   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.469678   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.472023   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.472637   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.472656   97943 pod_ready.go:82] duration metric: took 6.295928ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472667   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472740   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:07:51.472751   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.472759   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.472768   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.475075   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.475717   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.475733   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.475739   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.475743   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.477769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.478274   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.478291   97943 pod_ready.go:82] duration metric: took 5.614539ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478301   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478367   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:07:51.478379   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.478388   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.478394   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.480522   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.481177   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.481192   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.481202   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.481209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.483181   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:07:51.483658   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.483673   97943 pod_ready.go:82] duration metric: took 5.36618ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483680   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483721   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:07:51.483729   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.483736   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.483740   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.485816   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.486281   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.486294   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.486301   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.486305   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.488586   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.489007   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.489022   97943 pod_ready.go:82] duration metric: took 5.33676ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.489033   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.652421   97943 request.go:632] Waited for 163.314648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652507   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652514   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.652522   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.652529   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.655875   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.852945   97943 request.go:632] Waited for 196.352422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853007   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853013   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.853021   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.853024   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.855755   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.856291   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.856309   97943 pod_ready.go:82] duration metric: took 367.27061ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.856319   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.052337   97943 request.go:632] Waited for 195.923221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052427   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052445   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.052456   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.052464   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.055099   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.252077   97943 request.go:632] Waited for 196.296135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252149   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252156   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.252167   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.252174   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.255050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.255574   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.255594   97943 pod_ready.go:82] duration metric: took 399.267887ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.255606   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.452073   97943 request.go:632] Waited for 196.39546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452157   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452173   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.452186   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.452244   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.458811   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:52.652632   97943 request.go:632] Waited for 193.214443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652697   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652702   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.652711   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.652716   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.655373   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.655983   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.656003   97943 pod_ready.go:82] duration metric: took 400.387415ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.656017   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.852497   97943 request.go:632] Waited for 196.400538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852597   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852602   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.852610   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.852615   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.855857   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.052833   97943 request.go:632] Waited for 196.298843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052897   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052903   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.052910   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.052914   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.055870   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.056472   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.056497   97943 pod_ready.go:82] duration metric: took 400.471759ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.056510   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.252421   97943 request.go:632] Waited for 195.828491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252528   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252541   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.252551   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.252557   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.255434   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.452445   97943 request.go:632] Waited for 196.391925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452546   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452560   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.452570   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.452575   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.456118   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.456572   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.456590   97943 pod_ready.go:82] duration metric: took 400.071362ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.456605   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.652799   97943 request.go:632] Waited for 196.033566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652870   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652877   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.652889   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.652897   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.656566   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.852630   97943 request.go:632] Waited for 195.347256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852735   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852743   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.852750   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.852754   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.856029   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.856560   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.856580   97943 pod_ready.go:82] duration metric: took 399.967291ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.856593   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.052778   97943 request.go:632] Waited for 196.074454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052856   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052864   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.052876   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.052886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.056269   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.252099   97943 request.go:632] Waited for 195.297548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252172   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.252179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.252194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.256109   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.256828   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.256845   97943 pod_ready.go:82] duration metric: took 400.243574ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.256855   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.452369   97943 request.go:632] Waited for 195.428155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452450   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452455   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.452462   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.452469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.455694   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.652684   97943 request.go:632] Waited for 196.354028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652789   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652798   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.652807   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.652815   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.655871   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.656329   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.656346   97943 pod_ready.go:82] duration metric: took 399.484539ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.656357   97943 pod_ready.go:39] duration metric: took 3.201198757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
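
Each pod wait above pairs a GET on the pod with a GET on its node, and the client-side throttling (about 5 requests/second) is why the later checks take roughly 400ms apiece. A coarse CLI approximation of the same gate, using the label selectors listed in the log:

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
    done
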
	I1210 00:07:54.656372   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:07:54.656424   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:07:54.671199   97943 api_server.go:72] duration metric: took 20.989077821s to wait for apiserver process to appear ...
	I1210 00:07:54.671227   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:07:54.671247   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:07:54.675276   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
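
The apiserver health gate is two checks: a pgrep for the kube-apiserver process on the node, then an HTTPS GET against /healthz that must return the literal body "ok". A shell sketch of both (on a default RBAC setup /healthz is readable anonymously, so skipping certificate verification with -k is enough for this sketch):

    # process check, as run over SSH by the test
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

    # health endpoint; a healthy apiserver answers 200 with the body "ok"
    curl -k https://192.168.39.187:8443/healthz
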
	I1210 00:07:54.675337   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:07:54.675341   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.675349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.675356   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.676142   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:07:54.676268   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:07:54.676284   97943 api_server.go:131] duration metric: took 5.052294ms to wait for apiserver health ...
	I1210 00:07:54.676295   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:07:54.852698   97943 request.go:632] Waited for 176.309011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852754   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852758   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.852767   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.852774   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.857339   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:54.861880   97943 system_pods.go:59] 17 kube-system pods found
	I1210 00:07:54.861907   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:54.861912   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:54.861916   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:54.861920   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:54.861952   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:54.861962   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:54.861965   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:54.861969   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:54.861972   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:54.861979   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:54.861982   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:54.861985   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:54.861988   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:54.861992   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:54.861997   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:54.862000   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:54.862003   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:54.862009   97943 system_pods.go:74] duration metric: took 185.705934ms to wait for pod list to return data ...
	I1210 00:07:54.862019   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:07:55.052828   97943 request.go:632] Waited for 190.716484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052905   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052910   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.052920   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.052925   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.056476   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.056707   97943 default_sa.go:45] found service account: "default"
	I1210 00:07:55.056722   97943 default_sa.go:55] duration metric: took 194.697141ms for default service account to be created ...
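
The default_sa gate only lists ServiceAccounts in the default namespace and succeeds once one named "default" exists; the same check by hand:

    kubectl -n default get serviceaccount default
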
	I1210 00:07:55.056734   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:07:55.252140   97943 request.go:632] Waited for 195.318975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252222   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252228   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.252235   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.252246   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.256177   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.260950   97943 system_pods.go:86] 17 kube-system pods found
	I1210 00:07:55.260986   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:55.260993   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:55.260998   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:55.261002   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:55.261005   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:55.261009   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:55.261013   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:55.261017   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:55.261021   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:55.261025   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:55.261028   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:55.261032   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:55.261035   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:55.261038   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:55.261041   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:55.261044   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:55.261047   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:55.261054   97943 system_pods.go:126] duration metric: took 204.311621ms to wait for k8s-apps to be running ...
	I1210 00:07:55.261063   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:07:55.261104   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:55.274767   97943 system_svc.go:56] duration metric: took 13.694234ms WaitForService to wait for kubelet
	I1210 00:07:55.274800   97943 kubeadm.go:582] duration metric: took 21.592682957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:07:55.274820   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:07:55.452205   97943 request.go:632] Waited for 177.292861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452266   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452271   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.452278   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.452283   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.455802   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.456649   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456674   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456687   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456691   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456696   97943 node_conditions.go:105] duration metric: took 181.87045ms to run NodePressure ...
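
The NodePressure step reads /api/v1/nodes once, presumably to confirm the pressure conditions are clear, and logs each node's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node here). The same fields can be pulled directly:

    # prints each node's capacity map, e.g. cpu:2, ephemeral-storage:17734596Ki
    kubectl get node ha-070032 -o jsonpath='{.status.capacity}'
    kubectl get node ha-070032-m02 -o jsonpath='{.status.capacity}'
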
	I1210 00:07:55.456708   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:07:55.456739   97943 start.go:255] writing updated cluster config ...
	I1210 00:07:55.458841   97943 out.go:201] 
	I1210 00:07:55.460254   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:55.460350   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.461990   97943 out.go:177] * Starting "ha-070032-m03" control-plane node in "ha-070032" cluster
	I1210 00:07:55.463162   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:07:55.463187   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:07:55.463285   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:07:55.463296   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:07:55.463384   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.463555   97943 start.go:360] acquireMachinesLock for ha-070032-m03: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:07:55.463598   97943 start.go:364] duration metric: took 23.179µs to acquireMachinesLock for "ha-070032-m03"
	I1210 00:07:55.463615   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:55.463708   97943 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1210 00:07:55.465955   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:07:55.466061   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:55.466099   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:55.482132   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1210 00:07:55.482649   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:55.483189   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:55.483214   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:55.483546   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:55.483725   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:07:55.483847   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:07:55.483970   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:07:55.484001   97943 client.go:168] LocalClient.Create starting
	I1210 00:07:55.484030   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:07:55.484063   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484076   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484129   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:07:55.484150   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484160   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484177   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:07:55.484187   97943 main.go:141] libmachine: (ha-070032-m03) Calling .PreCreateCheck
	I1210 00:07:55.484346   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:07:55.484732   97943 main.go:141] libmachine: Creating machine...
	I1210 00:07:55.484749   97943 main.go:141] libmachine: (ha-070032-m03) Calling .Create
	I1210 00:07:55.484892   97943 main.go:141] libmachine: (ha-070032-m03) Creating KVM machine...
	I1210 00:07:55.486009   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing default KVM network
	I1210 00:07:55.486135   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing private KVM network mk-ha-070032
	I1210 00:07:55.486275   97943 main.go:141] libmachine: (ha-070032-m03) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.486315   97943 main.go:141] libmachine: (ha-070032-m03) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:07:55.486369   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.486273   98753 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.486441   97943 main.go:141] libmachine: (ha-070032-m03) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:07:55.750942   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.750806   98753 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa...
	I1210 00:07:55.823142   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.822993   98753 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk...
	I1210 00:07:55.823184   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing magic tar header
	I1210 00:07:55.823200   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing SSH key tar header
	I1210 00:07:55.823214   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.823115   98753 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.823231   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03
	I1210 00:07:55.823252   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 (perms=drwx------)
	I1210 00:07:55.823278   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:07:55.823337   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:07:55.823363   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.823375   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:07:55.823392   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:07:55.823405   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:07:55.823415   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:07:55.823431   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:07:55.823442   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home
	I1210 00:07:55.823456   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Skipping /home - not owner
	I1210 00:07:55.823471   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:07:55.823488   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:07:55.823501   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:55.824547   97943 main.go:141] libmachine: (ha-070032-m03) define libvirt domain using xml: 
	I1210 00:07:55.824562   97943 main.go:141] libmachine: (ha-070032-m03) <domain type='kvm'>
	I1210 00:07:55.824568   97943 main.go:141] libmachine: (ha-070032-m03)   <name>ha-070032-m03</name>
	I1210 00:07:55.824572   97943 main.go:141] libmachine: (ha-070032-m03)   <memory unit='MiB'>2200</memory>
	I1210 00:07:55.824578   97943 main.go:141] libmachine: (ha-070032-m03)   <vcpu>2</vcpu>
	I1210 00:07:55.824582   97943 main.go:141] libmachine: (ha-070032-m03)   <features>
	I1210 00:07:55.824588   97943 main.go:141] libmachine: (ha-070032-m03)     <acpi/>
	I1210 00:07:55.824594   97943 main.go:141] libmachine: (ha-070032-m03)     <apic/>
	I1210 00:07:55.824599   97943 main.go:141] libmachine: (ha-070032-m03)     <pae/>
	I1210 00:07:55.824605   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824615   97943 main.go:141] libmachine: (ha-070032-m03)   </features>
	I1210 00:07:55.824649   97943 main.go:141] libmachine: (ha-070032-m03)   <cpu mode='host-passthrough'>
	I1210 00:07:55.824662   97943 main.go:141] libmachine: (ha-070032-m03)   
	I1210 00:07:55.824670   97943 main.go:141] libmachine: (ha-070032-m03)   </cpu>
	I1210 00:07:55.824678   97943 main.go:141] libmachine: (ha-070032-m03)   <os>
	I1210 00:07:55.824685   97943 main.go:141] libmachine: (ha-070032-m03)     <type>hvm</type>
	I1210 00:07:55.824690   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='cdrom'/>
	I1210 00:07:55.824697   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='hd'/>
	I1210 00:07:55.824703   97943 main.go:141] libmachine: (ha-070032-m03)     <bootmenu enable='no'/>
	I1210 00:07:55.824709   97943 main.go:141] libmachine: (ha-070032-m03)   </os>
	I1210 00:07:55.824714   97943 main.go:141] libmachine: (ha-070032-m03)   <devices>
	I1210 00:07:55.824720   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='cdrom'>
	I1210 00:07:55.824728   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/boot2docker.iso'/>
	I1210 00:07:55.824735   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hdc' bus='scsi'/>
	I1210 00:07:55.824740   97943 main.go:141] libmachine: (ha-070032-m03)       <readonly/>
	I1210 00:07:55.824746   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824753   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='disk'>
	I1210 00:07:55.824761   97943 main.go:141] libmachine: (ha-070032-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:07:55.824769   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk'/>
	I1210 00:07:55.824776   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hda' bus='virtio'/>
	I1210 00:07:55.824780   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824787   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824793   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='mk-ha-070032'/>
	I1210 00:07:55.824799   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824804   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824809   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824814   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='default'/>
	I1210 00:07:55.824819   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824824   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824830   97943 main.go:141] libmachine: (ha-070032-m03)     <serial type='pty'>
	I1210 00:07:55.824835   97943 main.go:141] libmachine: (ha-070032-m03)       <target port='0'/>
	I1210 00:07:55.824842   97943 main.go:141] libmachine: (ha-070032-m03)     </serial>
	I1210 00:07:55.824846   97943 main.go:141] libmachine: (ha-070032-m03)     <console type='pty'>
	I1210 00:07:55.824852   97943 main.go:141] libmachine: (ha-070032-m03)       <target type='serial' port='0'/>
	I1210 00:07:55.824859   97943 main.go:141] libmachine: (ha-070032-m03)     </console>
	I1210 00:07:55.824863   97943 main.go:141] libmachine: (ha-070032-m03)     <rng model='virtio'>
	I1210 00:07:55.824871   97943 main.go:141] libmachine: (ha-070032-m03)       <backend model='random'>/dev/random</backend>
	I1210 00:07:55.824874   97943 main.go:141] libmachine: (ha-070032-m03)     </rng>
	I1210 00:07:55.824881   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824884   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824891   97943 main.go:141] libmachine: (ha-070032-m03)   </devices>
	I1210 00:07:55.824895   97943 main.go:141] libmachine: (ha-070032-m03) </domain>
	I1210 00:07:55.824901   97943 main.go:141] libmachine: (ha-070032-m03) 
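
Editor's note: the lines above show the kvm2 driver printing the libvirt domain XML before defining and starting the VM. The following is a minimal, hypothetical Go sketch of that define-and-start step using the libvirt.org/go/libvirt bindings; the XML placeholder and function names are illustrative assumptions, not minikube's actual driver code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart defines a persistent domain from the given XML and boots it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps in the log.
func defineAndStart(domainXML string) error {
	// Connect to the same URI the log shows (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// Register the domain definition with libvirt.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Start the domain; the driver then waits for it to obtain a DHCP lease.
	return dom.Create()
}

func main() {
	// Placeholder XML; the real definition is the <domain type='kvm'> document logged above.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
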
	I1210 00:07:55.831443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:5a:d9:d9 in network default
	I1210 00:07:55.832042   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:55.832057   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring networks are active...
	I1210 00:07:55.832934   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network default is active
	I1210 00:07:55.833292   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network mk-ha-070032 is active
	I1210 00:07:55.833793   97943 main.go:141] libmachine: (ha-070032-m03) Getting domain xml...
	I1210 00:07:55.834538   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:57.048312   97943 main.go:141] libmachine: (ha-070032-m03) Waiting to get IP...
	I1210 00:07:57.049343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.049867   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.049936   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.049857   98753 retry.go:31] will retry after 285.89703ms: waiting for machine to come up
	I1210 00:07:57.337509   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.337895   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.337921   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.337875   98753 retry.go:31] will retry after 339.218188ms: waiting for machine to come up
	I1210 00:07:57.678323   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.678856   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.678881   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.678806   98753 retry.go:31] will retry after 294.170833ms: waiting for machine to come up
	I1210 00:07:57.974134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.974660   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.974681   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.974611   98753 retry.go:31] will retry after 408.745882ms: waiting for machine to come up
	I1210 00:07:58.385123   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.385636   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.385664   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.385591   98753 retry.go:31] will retry after 527.821664ms: waiting for machine to come up
	I1210 00:07:58.915568   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.916006   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.916035   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.915961   98753 retry.go:31] will retry after 925.585874ms: waiting for machine to come up
	I1210 00:07:59.843180   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:59.843652   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:59.843679   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:59.843610   98753 retry.go:31] will retry after 870.720245ms: waiting for machine to come up
	I1210 00:08:00.715984   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:00.716446   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:00.716472   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:00.716425   98753 retry.go:31] will retry after 1.331743311s: waiting for machine to come up
	I1210 00:08:02.049640   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:02.050041   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:02.050067   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:02.049985   98753 retry.go:31] will retry after 1.76199987s: waiting for machine to come up
	I1210 00:08:03.813933   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:03.814414   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:03.814439   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:03.814370   98753 retry.go:31] will retry after 1.980303699s: waiting for machine to come up
	I1210 00:08:05.796494   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:05.797056   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:05.797086   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:05.797021   98753 retry.go:31] will retry after 2.086128516s: waiting for machine to come up
	I1210 00:08:07.884316   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:07.884692   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:07.884721   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:07.884642   98753 retry.go:31] will retry after 2.780301455s: waiting for machine to come up
	I1210 00:08:10.666546   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:10.666965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:10.666996   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:10.666924   98753 retry.go:31] will retry after 4.142573793s: waiting for machine to come up
	I1210 00:08:14.811574   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:14.811965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:14.811989   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:14.811918   98753 retry.go:31] will retry after 5.321214881s: waiting for machine to come up
	I1210 00:08:20.135607   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136014   97943 main.go:141] libmachine: (ha-070032-m03) Found IP for machine: 192.168.39.244
	I1210 00:08:20.136038   97943 main.go:141] libmachine: (ha-070032-m03) Reserving static IP address...
	I1210 00:08:20.136048   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136451   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find host DHCP lease matching {name: "ha-070032-m03", mac: "52:54:00:36:e7:81", ip: "192.168.39.244"} in network mk-ha-070032
	I1210 00:08:20.209941   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Getting to WaitForSSH function...
	I1210 00:08:20.209976   97943 main.go:141] libmachine: (ha-070032-m03) Reserved static IP address: 192.168.39.244
	I1210 00:08:20.209989   97943 main.go:141] libmachine: (ha-070032-m03) Waiting for SSH to be available...
	I1210 00:08:20.212879   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213267   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.213298   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213460   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH client type: external
	I1210 00:08:20.213487   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa (-rw-------)
	I1210 00:08:20.213527   97943 main.go:141] libmachine: (ha-070032-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:08:20.213547   97943 main.go:141] libmachine: (ha-070032-m03) DBG | About to run SSH command:
	I1210 00:08:20.213584   97943 main.go:141] libmachine: (ha-070032-m03) DBG | exit 0
	I1210 00:08:20.342480   97943 main.go:141] libmachine: (ha-070032-m03) DBG | SSH cmd err, output: <nil>: 
	I1210 00:08:20.342791   97943 main.go:141] libmachine: (ha-070032-m03) KVM machine creation complete!
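
Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines above come from minikube's retry helper while the new domain waits for an IP. Below is a minimal sketch of that wait-with-growing-delay pattern in Go; lookupIP, the backoff factor, and the timeout are assumptions for illustration, not the actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases; it is a placeholder.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing delay until it succeeds or the timeout expires,
// mirroring the retry messages in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay on each failed attempt
	}
	return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err == nil {
		fmt.Println("Found IP for machine:", ip)
	}
}
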
	I1210 00:08:20.343090   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:20.343678   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.343881   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.344092   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:08:20.344125   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetState
	I1210 00:08:20.345413   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:08:20.345430   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:08:20.345437   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:08:20.345450   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.347967   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348355   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.348389   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348481   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.348653   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348776   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348911   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.349041   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.349329   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.349348   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:08:20.449562   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.449588   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:08:20.449598   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.452398   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452785   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.452812   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452941   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.453110   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453240   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453428   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.453598   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.453780   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.453798   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:08:20.555272   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:08:20.555337   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:08:20.555348   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:08:20.555362   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555624   97943 buildroot.go:166] provisioning hostname "ha-070032-m03"
	I1210 00:08:20.555652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555844   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.558784   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559157   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.559192   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559357   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.559555   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559716   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559850   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.560050   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.560266   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.560285   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m03 && echo "ha-070032-m03" | sudo tee /etc/hostname
	I1210 00:08:20.676771   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m03
	
	I1210 00:08:20.676807   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.679443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.679776   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.679807   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.680006   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.680185   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680359   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680491   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.680620   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.680832   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.680847   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:08:20.791291   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.791325   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:08:20.791341   97943 buildroot.go:174] setting up certificates
	I1210 00:08:20.791358   97943 provision.go:84] configureAuth start
	I1210 00:08:20.791370   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.791652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:20.794419   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.794874   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.794902   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.795002   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.798177   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798590   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.798619   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798789   97943 provision.go:143] copyHostCerts
	I1210 00:08:20.798825   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798862   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:08:20.798871   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798934   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:08:20.799007   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799025   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:08:20.799030   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799053   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:08:20.799097   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799112   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:08:20.799119   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799140   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:08:20.799198   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m03 san=[127.0.0.1 192.168.39.244 ha-070032-m03 localhost minikube]
	I1210 00:08:20.901770   97943 provision.go:177] copyRemoteCerts
	I1210 00:08:20.901829   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:08:20.901857   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.904479   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904810   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.904842   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904999   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.905202   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.905341   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.905465   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:20.987981   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:08:20.988061   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:08:21.011122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:08:21.011186   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:08:21.033692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:08:21.033754   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:08:21.056597   97943 provision.go:87] duration metric: took 265.223032ms to configureAuth
	I1210 00:08:21.056629   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:08:21.057591   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:21.057673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.060831   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.061378   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.061904   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062107   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062269   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.062474   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.062700   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.062721   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:08:21.281273   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:08:21.281301   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:08:21.281310   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetURL
	I1210 00:08:21.282833   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using libvirt version 6000000
	I1210 00:08:21.285219   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285581   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.285613   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285747   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:08:21.285761   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:08:21.285769   97943 client.go:171] duration metric: took 25.801757929s to LocalClient.Create
	I1210 00:08:21.285791   97943 start.go:167] duration metric: took 25.801831678s to libmachine.API.Create "ha-070032"
	I1210 00:08:21.285798   97943 start.go:293] postStartSetup for "ha-070032-m03" (driver="kvm2")
	I1210 00:08:21.285807   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:08:21.285828   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.286085   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:08:21.286117   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.288055   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288329   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.288370   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288480   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.288647   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.288777   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.288901   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.369391   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:08:21.373285   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:08:21.373310   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:08:21.373392   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:08:21.373503   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:08:21.373518   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:08:21.373639   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:08:21.382298   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:21.403806   97943 start.go:296] duration metric: took 117.996202ms for postStartSetup
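
Editor's note: each "ssh_runner.go:195] Run:" line above is a command executed on the new node over SSH using the generated id_rsa key. The sketch below shows that pattern with golang.org/x/crypto/ssh; the user, host, key path, and command are taken from the log for illustration only, and this is not minikube's actual ssh_runner.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one command on host as user, authenticating with the private key
// at keyPath, and returns the combined stdout/stderr.
func runRemote(user, host, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the log likewise disables strict host key checking
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("docker", "192.168.39.244",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa",
		"cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
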
	I1210 00:08:21.403863   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:21.404476   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.407162   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407495   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.407517   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407796   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:08:21.408029   97943 start.go:128] duration metric: took 25.944309943s to createHost
	I1210 00:08:21.408053   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.410158   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410458   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.410486   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410661   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.410839   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411023   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411142   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.411301   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.411462   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.411473   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:08:21.514926   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789301.493981402
	
	I1210 00:08:21.514949   97943 fix.go:216] guest clock: 1733789301.493981402
	I1210 00:08:21.514956   97943 fix.go:229] Guest: 2024-12-10 00:08:21.493981402 +0000 UTC Remote: 2024-12-10 00:08:21.408042688 +0000 UTC m=+148.654123328 (delta=85.938714ms)
	I1210 00:08:21.514972   97943 fix.go:200] guest clock delta is within tolerance: 85.938714ms
	I1210 00:08:21.514978   97943 start.go:83] releasing machines lock for "ha-070032-m03", held for 26.05137115s
	I1210 00:08:21.514997   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.515241   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.517912   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.518241   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.518261   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.520470   97943 out.go:177] * Found network options:
	I1210 00:08:21.521800   97943 out.go:177]   - NO_PROXY=192.168.39.187,192.168.39.198
	W1210 00:08:21.523143   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.523168   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.523188   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523682   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523924   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.524029   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:08:21.524084   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	W1210 00:08:21.524110   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.524137   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.524228   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:08:21.524251   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.527134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527403   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527435   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527461   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527644   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.527864   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527884   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.527885   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.528014   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.528094   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528182   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.528256   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.528295   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528396   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.759543   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:08:21.765842   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:08:21.765945   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:08:21.781497   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:08:21.781528   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:08:21.781601   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:08:21.798260   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:08:21.812631   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:08:21.812703   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:08:21.826291   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:08:21.839819   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:08:21.970011   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:08:22.106825   97943 docker.go:233] disabling docker service ...
	I1210 00:08:22.106898   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:08:22.120845   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:08:22.133078   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:08:22.277754   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:08:22.396135   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:08:22.410691   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:08:22.428016   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:08:22.428081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.437432   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:08:22.437492   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.446807   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.457081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.466785   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:08:22.476232   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.485876   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.501168   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.511414   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:08:22.520354   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:08:22.520415   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:08:22.532412   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:08:22.541467   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:22.650142   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:08:22.739814   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:08:22.739908   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:08:22.744756   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:08:22.744820   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:08:22.748420   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:08:22.786505   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:08:22.786627   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.812591   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.840186   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:08:22.841668   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:08:22.842917   97943 out.go:177]   - env NO_PROXY=192.168.39.187,192.168.39.198
	I1210 00:08:22.843965   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:22.846623   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847074   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:22.847104   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847299   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:08:22.851246   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:22.863976   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:08:22.864213   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:22.864497   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.864537   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.879688   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1210 00:08:22.880163   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.880674   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.880695   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.880999   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.881201   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:08:22.882501   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:22.882829   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.882872   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.897175   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1210 00:08:22.897634   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.898146   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.898164   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.898482   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.898668   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:22.898817   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.244
	I1210 00:08:22.898832   97943 certs.go:194] generating shared ca certs ...
	I1210 00:08:22.898852   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:22.899000   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:08:22.899051   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:08:22.899064   97943 certs.go:256] generating profile certs ...
	I1210 00:08:22.899170   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:08:22.899201   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8
	I1210 00:08:22.899223   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:08:23.092450   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 ...
	I1210 00:08:23.092478   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8: {Name:mk366065b18659314ca3f0bba1448963daaf0a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092639   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 ...
	I1210 00:08:23.092651   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8: {Name:mk5fa66078dcf45a83918146be6cef89d508f259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092720   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:08:23.092839   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:08:23.092959   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:08:23.092977   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:08:23.092992   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:08:23.093006   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:08:23.093017   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:08:23.093029   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:08:23.093041   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:08:23.093053   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:08:23.106669   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:08:23.106767   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:08:23.106812   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:08:23.106826   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:08:23.106858   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:08:23.106887   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:08:23.106916   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:08:23.107014   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:23.107059   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.107078   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.107095   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.107140   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:23.110428   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.110865   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:23.110897   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.111098   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:23.111299   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:23.111497   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:23.111654   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:23.182834   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:08:23.187460   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:08:23.201682   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:08:23.206212   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:08:23.216977   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:08:23.221040   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:08:23.231771   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:08:23.235936   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:08:23.245237   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:08:23.249225   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:08:23.259163   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:08:23.262970   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:08:23.272905   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:08:23.296036   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:08:23.319479   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:08:23.343697   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:08:23.365055   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1210 00:08:23.386745   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:08:23.408376   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:08:23.431761   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:08:23.453442   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:08:23.474461   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:08:23.496103   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:08:23.518047   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:08:23.533023   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:08:23.547698   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:08:23.563066   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:08:23.577579   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:08:23.592182   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:08:23.608125   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:08:23.627416   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:08:23.632821   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:08:23.642458   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646845   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646909   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.652298   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:08:23.662442   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:08:23.672292   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676158   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676205   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.681586   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:08:23.691472   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:08:23.701487   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705375   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705413   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.710443   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:08:23.720294   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:08:23.723799   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:08:23.723848   97943 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.2 crio true true} ...
	I1210 00:08:23.723926   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:08:23.723949   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:08:23.723977   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:08:23.738685   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:08:23.738750   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1210 00:08:23.738796   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.747698   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:08:23.747755   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1210 00:08:23.756827   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:08:23.756846   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:23.756856   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1210 00:08:23.756914   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.756945   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756968   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.773755   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773816   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:08:23.773823   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:08:23.773877   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:08:23.793177   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:08:23.793213   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1210 00:08:24.557518   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:08:24.566776   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:08:24.582142   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:08:24.597144   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:08:24.611549   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:08:24.615055   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:24.625780   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:24.763770   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:24.783613   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:24.784058   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:24.784117   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:24.799970   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I1210 00:08:24.800574   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:24.801077   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:24.801104   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:24.801443   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:24.801614   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:24.801763   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:08:24.801913   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:08:24.801952   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:24.804893   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805288   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:24.805318   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805470   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:24.805660   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:24.805792   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:24.805938   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:24.954369   97943 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:24.954415   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I1210 00:08:45.926879   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (20.972431626s)
	I1210 00:08:45.926930   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:08:46.537890   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m03 minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:08:46.678755   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:08:46.787657   97943 start.go:319] duration metric: took 21.985888121s to joinCluster
	I1210 00:08:46.787759   97943 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:46.788166   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:46.789343   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:08:46.790511   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:47.024805   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:47.076330   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:08:47.076598   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:08:47.076672   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:08:47.076938   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m03" to be "Ready" ...
	I1210 00:08:47.077046   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.077058   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.077068   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.077072   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.081152   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:47.577919   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.577942   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.577950   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.577954   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.581367   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.077920   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.077946   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.077954   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.077957   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.081478   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.578106   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.578131   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.578140   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.578145   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.581394   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.077995   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.078020   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.078028   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.078032   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.081191   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.081654   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:49.577520   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.577543   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.577568   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.577572   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.580973   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:50.077456   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.077483   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.077492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.077497   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.083402   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:08:50.577976   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.577999   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.578007   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.578010   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.580506   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:08:51.077330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.077376   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.077386   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.077395   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.080649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.577290   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.577326   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.577339   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.577349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.580882   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.581750   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:52.077653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.077675   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.077683   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.077687   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.080889   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:52.578159   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.578187   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.578198   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.578206   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.582757   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:53.078153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.078177   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.078185   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.078189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.081439   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:53.577299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.577324   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.577333   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.577338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.580510   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:54.077196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.077220   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.077230   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.077236   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.083654   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:08:54.084273   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:54.578076   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.578111   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.578119   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.578123   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.581723   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.077626   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.077648   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.077657   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.077660   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.081300   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.577841   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.577867   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.577886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.581081   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.078005   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.078027   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.078036   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.078039   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.081200   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.577743   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.577839   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.577862   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.582190   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:56.583066   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:57.077440   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.077464   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.077472   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.077477   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.080605   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:57.577457   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.577484   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.577493   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.577503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.580830   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.077293   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.077331   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.077344   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.077352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.080511   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.577256   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.577282   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.577294   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.577299   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.580528   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.077895   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.077918   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.077926   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.077932   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.080996   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.081515   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:59.577418   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.577442   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.577450   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.577454   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.580861   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.077126   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.077149   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.077160   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.077166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.080369   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.577334   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.577369   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.577376   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.580424   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.077338   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.077364   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.077371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.077375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.080475   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.577333   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.577371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.577378   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.581002   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.581675   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:02.078158   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.078188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.078197   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.078202   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.081520   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:02.577513   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.577534   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.577542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.577548   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.580750   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:03.077225   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.077249   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.077258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.077262   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.080188   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:03.577192   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.577225   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.577233   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.577238   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.579962   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:04.078167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.078198   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.078207   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.078211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.081272   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:04.081781   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:04.577794   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.577818   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.577826   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.577833   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.580810   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.077153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.077175   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.077183   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.077189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.080235   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.577566   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.577589   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.577597   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.577601   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.580616   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.581339   97943 node_ready.go:49] node "ha-070032-m03" has status "Ready":"True"
	I1210 00:09:05.581357   97943 node_ready.go:38] duration metric: took 18.504395192s for node "ha-070032-m03" to be "Ready" ...
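The polling loop above is the node_ready check: the client repeatedly GETs /api/v1/nodes/ha-070032-m03 until the node's Ready condition reports True (about 18.5s here). A minimal client-go sketch of that same condition check, not minikube's actual code; the helper names, the 500ms interval, and the assumption of an already-configured *kubernetes.Clientset are illustrative only:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the named node currently has condition Ready=True,
    // i.e. the condition the node_ready lines above are waiting on.
    func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    // waitForNode polls roughly every 500ms, matching the cadence visible in the log.
    func waitForNode(ctx context.Context, cs *kubernetes.Clientset, name string) {
        for {
            if ok, err := nodeReady(ctx, cs, name); err == nil && ok {
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }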
	I1210 00:09:05.581372   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:09:05.581447   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:05.581458   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.581465   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.581469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.589597   97943 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1210 00:09:05.596462   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.596536   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:09:05.596544   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.596551   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.596556   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599226   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.599844   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.599860   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.599867   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599871   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.602025   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.602633   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.602657   97943 pod_ready.go:82] duration metric: took 6.171823ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602669   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602734   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:09:05.602745   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.602755   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.602759   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.605440   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.606129   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.606147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.606157   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.606166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.608461   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.608910   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.608928   97943 pod_ready.go:82] duration metric: took 6.250217ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608941   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608999   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:09:05.609009   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.609019   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.609029   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.611004   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.611561   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.611577   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.611587   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.611591   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.613769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.614248   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.614265   97943 pod_ready.go:82] duration metric: took 5.312355ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614275   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:09:05.614341   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.614352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.614362   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.616534   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.617151   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:05.617169   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.617188   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.617196   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.619058   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.619439   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.619455   97943 pod_ready.go:82] duration metric: took 5.173011ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.619463   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.777761   97943 request.go:632] Waited for 158.225465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777859   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777871   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.777881   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.777892   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.780968   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.978102   97943 request.go:632] Waited for 196.392006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978169   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978176   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.978187   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.978209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.981545   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.981978   97943 pod_ready.go:93] pod "etcd-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.981997   97943 pod_ready.go:82] duration metric: took 362.528097ms for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.982014   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.178303   97943 request.go:632] Waited for 196.186487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178366   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178371   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.178384   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.178391   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.181153   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:06.378297   97943 request.go:632] Waited for 196.356871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378357   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378363   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.378371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.378375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.381593   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.382165   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.382184   97943 pod_ready.go:82] duration metric: took 400.160632ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.382194   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.578291   97943 request.go:632] Waited for 195.993966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578353   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.578366   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.578370   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.582418   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:06.777593   97943 request.go:632] Waited for 194.199077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777669   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777674   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.777681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.777686   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.780997   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.781681   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.781703   97943 pod_ready.go:82] duration metric: took 399.498231ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.781713   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.977670   97943 request.go:632] Waited for 195.882184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977738   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977758   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.977770   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.977778   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.981052   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.178250   97943 request.go:632] Waited for 196.370885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178313   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178319   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.178329   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.178338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.182730   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:07.183284   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.183306   97943 pod_ready.go:82] duration metric: took 401.586259ms for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.183318   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.378237   97943 request.go:632] Waited for 194.824127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378316   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378322   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.378330   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.378333   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.382039   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.578085   97943 request.go:632] Waited for 195.402263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578148   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578154   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.578162   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.578166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.581490   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.582147   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.582169   97943 pod_ready.go:82] duration metric: took 398.840074ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.582184   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.778287   97943 request.go:632] Waited for 195.989005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778362   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778374   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.778386   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.778396   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.781669   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.978394   97943 request.go:632] Waited for 195.912192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978479   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978484   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.978492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.978496   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.981759   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.982200   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.982218   97943 pod_ready.go:82] duration metric: took 400.02698ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.982230   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.178354   97943 request.go:632] Waited for 196.04264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178439   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178449   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.178466   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.181631   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.378597   97943 request.go:632] Waited for 196.366344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378673   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378683   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.378697   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.378707   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.384450   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:09:08.385049   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.385078   97943 pod_ready.go:82] duration metric: took 402.840862ms for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.385096   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.577999   97943 request.go:632] Waited for 192.799851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578083   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578091   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.578100   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.578112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.581292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.777999   97943 request.go:632] Waited for 196.009017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778080   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778085   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.778093   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.778098   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.781007   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:08.781565   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.781586   97943 pod_ready.go:82] duration metric: took 396.482834ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.781597   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.978485   97943 request.go:632] Waited for 196.79193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978550   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978555   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.978577   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.978584   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.981555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.178372   97943 request.go:632] Waited for 196.176512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178445   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178450   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.178462   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.180718   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.181230   97943 pod_ready.go:93] pod "kube-proxy-bhnsm" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.181253   97943 pod_ready.go:82] duration metric: took 399.648229ms for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.181267   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.378388   97943 request.go:632] Waited for 197.025674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378477   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378488   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.378497   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.378503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.381425   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.578360   97943 request.go:632] Waited for 196.219183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578421   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578427   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.578435   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.578443   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.581280   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.581905   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.581924   97943 pod_ready.go:82] duration metric: took 400.650321ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.581937   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.778061   97943 request.go:632] Waited for 196.052401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778128   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.778155   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.778159   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.781448   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.978364   97943 request.go:632] Waited for 196.322768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978428   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978432   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.978441   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.978451   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.981730   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.982286   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.982308   97943 pod_ready.go:82] duration metric: took 400.362948ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.982322   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.178076   97943 request.go:632] Waited for 195.65251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178177   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.178190   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.178199   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.180876   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.377670   97943 request.go:632] Waited for 196.175118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377736   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377741   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.377751   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.377756   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.380801   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.381686   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.381707   97943 pod_ready.go:82] duration metric: took 399.375185ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.381723   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.578151   97943 request.go:632] Waited for 196.332176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578230   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578239   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.578251   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.578259   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.581336   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.778384   97943 request.go:632] Waited for 196.388806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778498   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778512   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.778524   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.778534   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.781555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.782190   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.782213   97943 pod_ready.go:82] duration metric: took 400.482867ms for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.782226   97943 pod_ready.go:39] duration metric: took 5.200841149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
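Each pod_ready wait above is the pod-level analogue of the node check: fetch the pod from the kube-system namespace and inspect its Ready condition. A sketch of that check, reusing the clientset cs and imports from the earlier node-ready sketch (the function name is illustrative, not minikube's):

    // podReady reports whether the named kube-system pod has condition Ready=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }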
	I1210 00:09:10.782243   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:09:10.782306   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:09:10.798221   97943 api_server.go:72] duration metric: took 24.010410964s to wait for apiserver process to appear ...
	I1210 00:09:10.798252   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:09:10.798277   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:09:10.802683   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:09:10.802763   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:09:10.802775   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.802786   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.802791   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.803637   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:09:10.803715   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:09:10.803733   97943 api_server.go:131] duration metric: took 5.473282ms to wait for apiserver health ...
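The healthz and version probes above can be reproduced with client-go's discovery client: GET /healthz returns the literal body "ok" when the apiserver is healthy, and ServerVersion reports the control-plane build (v1.31.2 here). A sketch, again assuming the clientset cs from the earlier snippets plus fmt and log imported:

    // Liveness: raw GET against /healthz, mirroring the healthz check logged above.
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    if err != nil {
        log.Fatalf("healthz probe failed: %v", err)
    }
    fmt.Printf("healthz: %s\n", body) // expect "ok"

    // Version: equivalent to the GET /version request in the log.
    v, err := cs.Discovery().ServerVersion()
    if err != nil {
        log.Fatalf("version check failed: %v", err)
    }
    fmt.Printf("control plane version: %s\n", v.GitVersion)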
	I1210 00:09:10.803747   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:09:10.978074   97943 request.go:632] Waited for 174.240033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978174   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.978200   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.978210   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.984458   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:09:10.990989   97943 system_pods.go:59] 24 kube-system pods found
	I1210 00:09:10.991013   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:10.991018   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:10.991022   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:10.991026   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:10.991029   97943 system_pods.go:61] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:10.991032   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:10.991034   97943 system_pods.go:61] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:10.991037   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:10.991041   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:10.991044   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:10.991047   97943 system_pods.go:61] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:10.991050   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:10.991054   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:10.991057   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:10.991060   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:10.991064   97943 system_pods.go:61] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:10.991068   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:10.991074   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:10.991078   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:10.991081   97943 system_pods.go:61] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:10.991084   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:10.991087   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:10.991090   97943 system_pods.go:61] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:10.991095   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:10.991101   97943 system_pods.go:74] duration metric: took 187.346055ms to wait for pod list to return data ...
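The repeated "Waited for … due to client-side throttling, not priority and fairness" lines in this phase are emitted by client-go's own rate limiter (request.go), not by the apiserver: with the default QPS/Burst on rest.Config, bursts of GETs get paced on the client. A hedged sketch of where those knobs live; the kubeconfig path and the values 50/100 below are illustrative, not what minikube configures:

    import "k8s.io/client-go/tools/clientcmd"

    // Build a clientset with explicit client-side limits; raising QPS/Burst
    // reduces the "client-side throttling" waits seen above.
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig) // kubeconfig path is illustrative
    if err != nil {
        log.Fatal(err)
    }
    config.QPS = 50
    config.Burst = 100
    cs, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }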
	I1210 00:09:10.991110   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:09:11.178582   97943 request.go:632] Waited for 187.368121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178661   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178670   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.178681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.178692   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.181792   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.181919   97943 default_sa.go:45] found service account: "default"
	I1210 00:09:11.181932   97943 default_sa.go:55] duration metric: took 190.816109ms for default service account to be created ...
	I1210 00:09:11.181940   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:09:11.378264   97943 request.go:632] Waited for 196.227358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378336   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378344   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.378355   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.378365   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.383056   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:11.390160   97943 system_pods.go:86] 24 kube-system pods found
	I1210 00:09:11.390190   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:11.390197   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:11.390201   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:11.390207   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:11.390211   97943 system_pods.go:89] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:11.390215   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:11.390219   97943 system_pods.go:89] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:11.390223   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:11.390227   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:11.390231   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:11.390238   97943 system_pods.go:89] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:11.390243   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:11.390247   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:11.390251   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:11.390256   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:11.390259   97943 system_pods.go:89] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:11.390263   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:11.390266   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:11.390273   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:11.390276   97943 system_pods.go:89] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:11.390280   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:11.390284   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:11.390287   97943 system_pods.go:89] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:11.390290   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:11.390298   97943 system_pods.go:126] duration metric: took 208.352897ms to wait for k8s-apps to be running ...
	I1210 00:09:11.390309   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:09:11.390362   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:09:11.405439   97943 system_svc.go:56] duration metric: took 15.123283ms WaitForService to wait for kubelet
	I1210 00:09:11.405468   97943 kubeadm.go:582] duration metric: took 24.617672778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:09:11.405491   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:09:11.577957   97943 request.go:632] Waited for 172.358102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578045   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578061   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.578081   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.578091   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.582050   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.583133   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583157   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583185   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583189   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583193   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583196   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583201   97943 node_conditions.go:105] duration metric: took 177.705427ms to run NodePressure ...
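The NodePressure pass above reads each node's capacity out of its status: all three nodes report 17734596Ki of ephemeral storage and 2 CPUs. A sketch of the same read, reusing cs; the resource keys are the standard corev1 names, the output format is illustrative:

    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    }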
	I1210 00:09:11.583218   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:09:11.583239   97943 start.go:255] writing updated cluster config ...
	I1210 00:09:11.583593   97943 ssh_runner.go:195] Run: rm -f paused
	I1210 00:09:11.635827   97943 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:09:11.638609   97943 out.go:177] * Done! kubectl is now configured to use "ha-070032" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.942023718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789568941999248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12ccc488-90e4-4964-a03d-b45d084b3c78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.944854390Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e476891-f72c-454a-afda-f47300356357 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.944917959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e476891-f72c-454a-afda-f47300356357 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.945181455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e476891-f72c-454a-afda-f47300356357 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.981382486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2748c4ff-4223-4b3d-80e7-f6183ceb071a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.981450510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2748c4ff-4223-4b3d-80e7-f6183ceb071a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.982521304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d136cbe-2433-4df7-b1da-29d2e62a97a2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.983180701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789568983157048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d136cbe-2433-4df7-b1da-29d2e62a97a2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.983891875Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f4ff2e-3641-4a68-a454-6d28b24bc726 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.984012856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f4ff2e-3641-4a68-a454-6d28b24bc726 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:48 ha-070032 crio[662]: time="2024-12-10 00:12:48.984219058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f4ff2e-3641-4a68-a454-6d28b24bc726 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.021128178Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3727e844-d01e-4e6d-8a4f-f3e10d8577ed name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.021212984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3727e844-d01e-4e6d-8a4f-f3e10d8577ed name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.022622252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38d805a6-8b74-4734-a5e2-950c79c81b2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.023211927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789569023186739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38d805a6-8b74-4734-a5e2-950c79c81b2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.024031678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e70333a-9484-4efb-a015-868f20dd2ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.024094803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e70333a-9484-4efb-a015-868f20dd2ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.024322761Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e70333a-9484-4efb-a015-868f20dd2ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.066914863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f664a862-7572-49b5-9c24-ce1a47aa5996 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.067024543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f664a862-7572-49b5-9c24-ce1a47aa5996 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.069295830Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d01d70ad-9d84-42e0-9c54-8701cf103721 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.070796946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789569070769215,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d01d70ad-9d84-42e0-9c54-8701cf103721 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.071373109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3528c058-2f28-4e6c-b1bd-7cc76bc823c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.071437213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3528c058-2f28-4e6c-b1bd-7cc76bc823c5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:49 ha-070032 crio[662]: time="2024-12-10 00:12:49.071684431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3528c058-2f28-4e6c-b1bd-7cc76bc823c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c6ab8dccd8ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e3f274c30a395       busybox-7dff88458-d682h
	e305236942a6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   5a85b4a79da52       coredns-7c65d6cfc9-nqnhw
	7c2e334f3ec55       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   f558795052a9d       coredns-7c65d6cfc9-fs6l6
	a0bc6f0cc193d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   3ad98b3ae6d22       storage-provisioner
	4c87cad753cfc       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   07cf68f38d235       kindnet-r97q9
	d7ce0ccc8b228       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f6e164f7d5dc2       kube-proxy-xsxdp
	2c832ea7354c3       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   63415c4eed5c6       kube-vip-ha-070032
	a1ad93591d94d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   974a006af9e0d       kube-apiserver-ha-070032
	1482c9caeda45       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   2ae901f42d388       kube-scheduler-ha-070032
	3cc792ca2c209       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   94eb5ad94038f       etcd-ha-070032
	d06c286b00c11       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   baf6b5fc008a9       kube-controller-manager-ha-070032
	
	
	==> coredns [7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea] <==
	[INFO] 10.244.3.2:46682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001449431s
	[INFO] 10.244.1.2:58178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186321s
	[INFO] 10.244.1.2:50380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193258s
	[INFO] 10.244.1.2:46652 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001618s
	[INFO] 10.244.1.2:57883 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426883s
	[INFO] 10.244.0.4:59352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009624s
	[INFO] 10.244.0.4:54543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069497s
	[INFO] 10.244.0.4:53696 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011622s
	[INFO] 10.244.0.4:55436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112389s
	[INFO] 10.244.3.2:43114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706864s
	[INFO] 10.244.3.2:56624 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088751s
	[INFO] 10.244.3.2:44513 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074851s
	[INFO] 10.244.3.2:49956 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081755s
	[INFO] 10.244.1.2:40349 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153721s
	[INFO] 10.244.0.4:44925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128981s
	[INFO] 10.244.0.4:36252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088006s
	[INFO] 10.244.0.4:39383 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070489s
	[INFO] 10.244.0.4:51627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125996s
	[INFO] 10.244.3.2:46896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118479s
	[INFO] 10.244.1.2:38261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013128s
	[INFO] 10.244.1.2:58062 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196774s
	[INFO] 10.244.0.4:47202 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140777s
	[INFO] 10.244.0.4:55399 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091936s
	[INFO] 10.244.3.2:58172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126998s
	[INFO] 10.244.3.2:58403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107335s
	
	
	==> coredns [e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8] <==
	[INFO] 10.244.3.2:39118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.049213372s
	[INFO] 10.244.1.2:47189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002650171s
	[INFO] 10.244.1.2:60873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149978s
	[INFO] 10.244.1.2:48109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137629s
	[INFO] 10.244.1.2:49474 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113792s
	[INFO] 10.244.0.4:41643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681013s
	[INFO] 10.244.0.4:48048 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011923s
	[INFO] 10.244.0.4:35726 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000999387s
	[INFO] 10.244.0.4:41981 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003888s
	[INFO] 10.244.3.2:42883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156584s
	[INFO] 10.244.3.2:47597 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174459s
	[INFO] 10.244.3.2:52426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001324612s
	[INFO] 10.244.3.2:51253 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071403s
	[INFO] 10.244.1.2:50492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118518s
	[INFO] 10.244.1.2:49203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108258s
	[INFO] 10.244.1.2:51348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096375s
	[INFO] 10.244.3.2:42362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236533s
	[INFO] 10.244.3.2:60373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010669s
	[INFO] 10.244.3.2:54648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107013s
	[INFO] 10.244.1.2:49645 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168571s
	[INFO] 10.244.1.2:37889 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146602s
	[INFO] 10.244.0.4:44430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098202s
	[INFO] 10.244.0.4:40310 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093003s
	[INFO] 10.244.3.2:55334 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110256s
	[INFO] 10.244.3.2:41666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108876s
	
	
	==> describe nodes <==
	Name:               ha-070032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-070032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb099128ff44c2a9726305ea6a63c95
	  System UUID:                8fb09912-8ff4-4c2a-9726-305ea6a63c95
	  Boot ID:                    72ec90c5-f76d-4c2b-9a52-435cb90236ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d682h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 coredns-7c65d6cfc9-fs6l6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-nqnhw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-070032                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-r97q9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-070032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-070032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-xsxdp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-070032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-070032                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-070032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-070032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-070032 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-070032 status is now: NodeReady
	  Normal  RegisteredNode           5m11s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  RegisteredNode           3m57s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	
	
	Name:               ha-070032-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:07:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:10:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-070032-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c2b302d819044f8ad0494a0ee312d67
	  System UUID:                2c2b302d-8190-44f8-ad04-94a0ee312d67
	  Boot ID:                    b80c4e1c-4168-43bd-ac70-470e7e9703f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7gbz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-070032-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-69btk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m19s
	  kube-system                 kube-apiserver-ha-070032-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-070032-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-7fm88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-scheduler-ha-070032-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-vip-ha-070032-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m19s                  cidrAllocator    Node ha-070032-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node ha-070032-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           3m57s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-070032-m02 status is now: NodeNotReady
	
	
	Name:               ha-070032-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-070032-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7af7f783967c41bab4027928f3eb1ce2
	  System UUID:                7af7f783-967c-41ba-b402-7928f3eb1ce2
	  Boot ID:                    d7bca268-a1b9-47e2-900d-e8e3d560bcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pw24w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 etcd-ha-070032-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-gbrrg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-070032-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-070032-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-bhnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-070032-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-vip-ha-070032-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  CIDRAssignmentFailed     4m6s                 cidrAllocator    Node ha-070032-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-070032-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	
	
	Name:               ha-070032-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_09_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-070032-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1722ee99e8fc4ae7bbf0809a3824e471
	  System UUID:                1722ee99-e8fc-4ae7-bbf0-809a3824e471
	  Boot ID:                    4df30219-5a9e-41b4-adfb-6890ccd87aac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-knnxw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-k8xs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m54s            kube-proxy       
	  Normal  CIDRAssignmentFailed     3m               cidrAllocator    Node ha-070032-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-070032-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s            node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-070032-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 00:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037715] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 00:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611346] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.711169] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.053296] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050206] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.175256] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.129791] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.262857] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.716566] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.745437] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.033385] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.073983] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.636013] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.381804] kauditd_printk_skb: 38 callbacks suppressed
	[Dec10 00:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06] <==
	{"level":"warn","ts":"2024-12-10T00:12:49.112743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.156344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.158029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.213224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.313167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.328167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.337592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.341790Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.358468Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.367809Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.376419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.379783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.408161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.411034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.417927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.418654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.429292Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.435388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.438490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.440942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.444319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.452078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.464049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.513295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:49.526273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:12:49 up 6 min,  0 users,  load average: 0.25, 0.30, 0.15
	Linux ha-070032 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3] <==
	I1210 00:12:14.361608       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:24.366813       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:24.366986       1 main.go:301] handling current node
	I1210 00:12:24.367104       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:24.367132       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:24.367328       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:24.367352       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:24.367457       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:24.367477       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.364895       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:34.364970       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.365169       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:34.365177       1 main.go:301] handling current node
	I1210 00:12:34.365200       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:34.365204       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:34.365319       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:34.365324       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361278       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:44.361407       1 main.go:301] handling current node
	I1210 00:12:44.361435       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:44.361453       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:44.361686       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:44.361767       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361952       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:44.361977       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c] <==
	W1210 00:06:33.327544       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187]
	I1210 00:06:33.328436       1 controller.go:615] quota admission added evaluator for: endpoints
	I1210 00:06:33.332351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 00:06:33.644177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1210 00:06:34.401030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1210 00:06:34.426254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 00:06:34.437836       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1210 00:06:39.341658       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1210 00:06:39.388665       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1210 00:09:16.643347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53112: use of closed network connection
	E1210 00:09:16.826908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53130: use of closed network connection
	E1210 00:09:17.054445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53146: use of closed network connection
	E1210 00:09:17.230406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53174: use of closed network connection
	E1210 00:09:17.395919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53190: use of closed network connection
	E1210 00:09:17.578908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53210: use of closed network connection
	E1210 00:09:17.752762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53234: use of closed network connection
	E1210 00:09:17.924915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53246: use of closed network connection
	E1210 00:09:18.096320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53250: use of closed network connection
	E1210 00:09:18.374453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53288: use of closed network connection
	E1210 00:09:18.551219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53308: use of closed network connection
	E1210 00:09:18.715487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53328: use of closed network connection
	E1210 00:09:18.882307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53350: use of closed network connection
	E1210 00:09:19.053232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	E1210 00:09:19.219127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53388: use of closed network connection
	W1210 00:10:43.338652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.244]
	
	
	==> kube-controller-manager [d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d] <==
	I1210 00:09:49.805217       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-070032-m04" podCIDRs=["10.244.4.0/24"]
	I1210 00:09:49.805335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.805501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.830568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.055099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.429393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:52.233446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.527465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.529595       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-070032-m04"
	I1210 00:09:53.635341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.748163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.769858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:00.115956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.020321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.021003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:10:09.036523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:12.188838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:20.604295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:11:07.214303       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:11:07.214659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.239149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.332434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.113905ms"
	I1210 00:11:07.332808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="177.2µs"
	I1210 00:11:08.619804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:12.462357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	
	
	==> kube-proxy [d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:06:40.034153       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:06:40.050742       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	E1210 00:06:40.050886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:06:40.097328       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:06:40.097397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:06:40.097429       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:06:40.099955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:06:40.100221       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:06:40.100242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:06:40.102079       1 config.go:199] "Starting service config controller"
	I1210 00:06:40.102108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:06:40.102130       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:06:40.102134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:06:40.103442       1 config.go:328] "Starting node config controller"
	I1210 00:06:40.103468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:06:40.203097       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:06:40.203185       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:06:40.203635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca] <==
	W1210 00:06:32.612869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:06:32.612911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:06:32.694210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.728214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:06:32.728261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.890681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:06:32.890785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.906571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:06:32.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:33.046474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:06:33.046616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:06:36.200867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1210 00:09:49.873453       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.876571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" pod="kube-system/kube-proxy-r2tf6"
	I1210 00:09:49.878867       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.879144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.879364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-v5wzl"
	I1210 00:09:49.879740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.938476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j8rtf" node="ha-070032-m04"
	E1210 00:09:49.939506       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-j8rtf"
	E1210 00:09:51.707755       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	E1210 00:09:51.707858       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f925375b-3698-422b-a607-5a92ae55da32(kube-system/kindnet-nqxxb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-nqxxb"
	E1210 00:09:51.707911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-nqxxb"
	I1210 00:09:51.707964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	
	
	==> kubelet <==
	Dec 10 00:11:34 ha-070032 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:11:34 ha-070032 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:11:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:11:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426250    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426301    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.428969    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.429023    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430352    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430374    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432645    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432732    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434466    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434800    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436591    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436615    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.323013    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438072    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438102    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439455    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439836    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.40s)
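Note: the dominant errors in the captured kubelet journal above are the repeated "failed to get HasDedicatedImageFs: missing image stats" messages from the eviction manager, which indicate the kubelet did not accept the image-filesystem statistics returned by CRI-O as complete. A minimal way to inspect the same CRI data directly on the node, assuming crictl is available in the guest, is:

    out/minikube-linux-amd64 ssh -p ha-070032 "sudo crictl imagefsinfo"

This only surfaces what the runtime reports; it is a debugging aid, not part of the test flow.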

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1210 00:12:53.150776   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.377284248s)
ha_test.go:415: expected profile "ha-070032" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-070032\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-070032\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-070032\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.187\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.244\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.178\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubev
irt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker
\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.26800569s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m03_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:05:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:05:52.791526   97943 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:52.791657   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791669   97943 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:52.791677   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791857   97943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:52.792405   97943 out.go:352] Setting JSON to false
	I1210 00:05:52.793229   97943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6504,"bootTime":1733782649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:52.793329   97943 start.go:139] virtualization: kvm guest
	I1210 00:05:52.796124   97943 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:52.797192   97943 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:52.797225   97943 notify.go:220] Checking for updates...
	I1210 00:05:52.799407   97943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:52.800504   97943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:52.801675   97943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:52.802744   97943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:52.803783   97943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:52.805109   97943 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:52.839813   97943 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:05:52.840958   97943 start.go:297] selected driver: kvm2
	I1210 00:05:52.841009   97943 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:05:52.841037   97943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:52.841764   97943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.841862   97943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:05:52.856053   97943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:05:52.856105   97943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:05:52.856343   97943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:52.856388   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:05:52.856439   97943 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1210 00:05:52.856451   97943 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 00:05:52.856513   97943 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:52.856629   97943 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.858290   97943 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:05:52.859441   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:05:52.859486   97943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:05:52.859496   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:05:52.859571   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:05:52.859584   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:05:52.859883   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:05:52.859904   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json: {Name:mke01e2b75d6b946a14cfa49d40b8237b928645a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:52.860050   97943 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:05:52.860091   97943 start.go:364] duration metric: took 24.816µs to acquireMachinesLock for "ha-070032"
	I1210 00:05:52.860115   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:52.860185   97943 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:05:52.862431   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:05:52.862625   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:52.862674   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:52.876494   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1210 00:05:52.876866   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:52.877406   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:05:52.877428   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:52.877772   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:52.877940   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:05:52.878106   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:05:52.878243   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:05:52.878282   97943 client.go:168] LocalClient.Create starting
	I1210 00:05:52.878351   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:05:52.878400   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878419   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878472   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:05:52.878494   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878509   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878535   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:05:52.878545   97943 main.go:141] libmachine: (ha-070032) Calling .PreCreateCheck
	I1210 00:05:52.878920   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:05:52.879333   97943 main.go:141] libmachine: Creating machine...
	I1210 00:05:52.879348   97943 main.go:141] libmachine: (ha-070032) Calling .Create
	I1210 00:05:52.879474   97943 main.go:141] libmachine: (ha-070032) Creating KVM machine...
	I1210 00:05:52.880541   97943 main.go:141] libmachine: (ha-070032) DBG | found existing default KVM network
	I1210 00:05:52.881177   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.881049   97966 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1210 00:05:52.881198   97943 main.go:141] libmachine: (ha-070032) DBG | created network xml: 
	I1210 00:05:52.881212   97943 main.go:141] libmachine: (ha-070032) DBG | <network>
	I1210 00:05:52.881222   97943 main.go:141] libmachine: (ha-070032) DBG |   <name>mk-ha-070032</name>
	I1210 00:05:52.881231   97943 main.go:141] libmachine: (ha-070032) DBG |   <dns enable='no'/>
	I1210 00:05:52.881237   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881250   97943 main.go:141] libmachine: (ha-070032) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:05:52.881265   97943 main.go:141] libmachine: (ha-070032) DBG |     <dhcp>
	I1210 00:05:52.881279   97943 main.go:141] libmachine: (ha-070032) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:05:52.881290   97943 main.go:141] libmachine: (ha-070032) DBG |     </dhcp>
	I1210 00:05:52.881301   97943 main.go:141] libmachine: (ha-070032) DBG |   </ip>
	I1210 00:05:52.881310   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881318   97943 main.go:141] libmachine: (ha-070032) DBG | </network>
	I1210 00:05:52.881328   97943 main.go:141] libmachine: (ha-070032) DBG | 
	I1210 00:05:52.886258   97943 main.go:141] libmachine: (ha-070032) DBG | trying to create private KVM network mk-ha-070032 192.168.39.0/24...
	I1210 00:05:52.950347   97943 main.go:141] libmachine: (ha-070032) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:52.950384   97943 main.go:141] libmachine: (ha-070032) DBG | private KVM network mk-ha-070032 192.168.39.0/24 created
	I1210 00:05:52.950396   97943 main.go:141] libmachine: (ha-070032) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:05:52.950439   97943 main.go:141] libmachine: (ha-070032) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:05:52.950463   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.950265   97966 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.225909   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.225784   97966 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa...
	I1210 00:05:53.325235   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325112   97966 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk...
	I1210 00:05:53.325266   97943 main.go:141] libmachine: (ha-070032) DBG | Writing magic tar header
	I1210 00:05:53.325288   97943 main.go:141] libmachine: (ha-070032) DBG | Writing SSH key tar header
	I1210 00:05:53.325300   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325244   97966 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:53.325369   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032
	I1210 00:05:53.325394   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 (perms=drwx------)
	I1210 00:05:53.325428   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:05:53.325447   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.325560   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:05:53.325599   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:05:53.325634   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:05:53.325659   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:05:53.325669   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:05:53.325681   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:05:53.325695   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:05:53.325703   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home
	I1210 00:05:53.325715   97943 main.go:141] libmachine: (ha-070032) DBG | Skipping /home - not owner
	I1210 00:05:53.325747   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:05:53.325763   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:53.326682   97943 main.go:141] libmachine: (ha-070032) define libvirt domain using xml: 
	I1210 00:05:53.326699   97943 main.go:141] libmachine: (ha-070032) <domain type='kvm'>
	I1210 00:05:53.326705   97943 main.go:141] libmachine: (ha-070032)   <name>ha-070032</name>
	I1210 00:05:53.326709   97943 main.go:141] libmachine: (ha-070032)   <memory unit='MiB'>2200</memory>
	I1210 00:05:53.326714   97943 main.go:141] libmachine: (ha-070032)   <vcpu>2</vcpu>
	I1210 00:05:53.326718   97943 main.go:141] libmachine: (ha-070032)   <features>
	I1210 00:05:53.326744   97943 main.go:141] libmachine: (ha-070032)     <acpi/>
	I1210 00:05:53.326772   97943 main.go:141] libmachine: (ha-070032)     <apic/>
	I1210 00:05:53.326783   97943 main.go:141] libmachine: (ha-070032)     <pae/>
	I1210 00:05:53.326806   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.326826   97943 main.go:141] libmachine: (ha-070032)   </features>
	I1210 00:05:53.326854   97943 main.go:141] libmachine: (ha-070032)   <cpu mode='host-passthrough'>
	I1210 00:05:53.326865   97943 main.go:141] libmachine: (ha-070032)   
	I1210 00:05:53.326872   97943 main.go:141] libmachine: (ha-070032)   </cpu>
	I1210 00:05:53.326882   97943 main.go:141] libmachine: (ha-070032)   <os>
	I1210 00:05:53.326889   97943 main.go:141] libmachine: (ha-070032)     <type>hvm</type>
	I1210 00:05:53.326900   97943 main.go:141] libmachine: (ha-070032)     <boot dev='cdrom'/>
	I1210 00:05:53.326906   97943 main.go:141] libmachine: (ha-070032)     <boot dev='hd'/>
	I1210 00:05:53.326920   97943 main.go:141] libmachine: (ha-070032)     <bootmenu enable='no'/>
	I1210 00:05:53.326944   97943 main.go:141] libmachine: (ha-070032)   </os>
	I1210 00:05:53.326956   97943 main.go:141] libmachine: (ha-070032)   <devices>
	I1210 00:05:53.326966   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='cdrom'>
	I1210 00:05:53.326982   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/boot2docker.iso'/>
	I1210 00:05:53.326995   97943 main.go:141] libmachine: (ha-070032)       <target dev='hdc' bus='scsi'/>
	I1210 00:05:53.327012   97943 main.go:141] libmachine: (ha-070032)       <readonly/>
	I1210 00:05:53.327027   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327039   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='disk'>
	I1210 00:05:53.327051   97943 main.go:141] libmachine: (ha-070032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:05:53.327066   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk'/>
	I1210 00:05:53.327074   97943 main.go:141] libmachine: (ha-070032)       <target dev='hda' bus='virtio'/>
	I1210 00:05:53.327080   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327086   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327091   97943 main.go:141] libmachine: (ha-070032)       <source network='mk-ha-070032'/>
	I1210 00:05:53.327096   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327101   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327107   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327127   97943 main.go:141] libmachine: (ha-070032)       <source network='default'/>
	I1210 00:05:53.327131   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327138   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327142   97943 main.go:141] libmachine: (ha-070032)     <serial type='pty'>
	I1210 00:05:53.327147   97943 main.go:141] libmachine: (ha-070032)       <target port='0'/>
	I1210 00:05:53.327152   97943 main.go:141] libmachine: (ha-070032)     </serial>
	I1210 00:05:53.327157   97943 main.go:141] libmachine: (ha-070032)     <console type='pty'>
	I1210 00:05:53.327167   97943 main.go:141] libmachine: (ha-070032)       <target type='serial' port='0'/>
	I1210 00:05:53.327176   97943 main.go:141] libmachine: (ha-070032)     </console>
	I1210 00:05:53.327183   97943 main.go:141] libmachine: (ha-070032)     <rng model='virtio'>
	I1210 00:05:53.327188   97943 main.go:141] libmachine: (ha-070032)       <backend model='random'>/dev/random</backend>
	I1210 00:05:53.327201   97943 main.go:141] libmachine: (ha-070032)     </rng>
	I1210 00:05:53.327208   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327212   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327219   97943 main.go:141] libmachine: (ha-070032)   </devices>
	I1210 00:05:53.327223   97943 main.go:141] libmachine: (ha-070032) </domain>
	I1210 00:05:53.327229   97943 main.go:141] libmachine: (ha-070032) 
	I1210 00:05:53.331717   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:3e:64:27 in network default
	I1210 00:05:53.332300   97943 main.go:141] libmachine: (ha-070032) Ensuring networks are active...
	I1210 00:05:53.332321   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:53.332935   97943 main.go:141] libmachine: (ha-070032) Ensuring network default is active
	I1210 00:05:53.333268   97943 main.go:141] libmachine: (ha-070032) Ensuring network mk-ha-070032 is active
	I1210 00:05:53.333775   97943 main.go:141] libmachine: (ha-070032) Getting domain xml...
	I1210 00:05:53.334418   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:54.486671   97943 main.go:141] libmachine: (ha-070032) Waiting to get IP...
	I1210 00:05:54.487631   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.488004   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.488023   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.487962   97966 retry.go:31] will retry after 250.94638ms: waiting for machine to come up
	I1210 00:05:54.740488   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.740898   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.740922   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.740853   97966 retry.go:31] will retry after 369.652496ms: waiting for machine to come up
	I1210 00:05:55.112670   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.113058   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.113088   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.113006   97966 retry.go:31] will retry after 419.563235ms: waiting for machine to come up
	I1210 00:05:55.534593   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.535015   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.535042   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.534960   97966 retry.go:31] will retry after 426.548067ms: waiting for machine to come up
	I1210 00:05:55.963569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.963962   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.963978   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.963937   97966 retry.go:31] will retry after 617.965427ms: waiting for machine to come up
	I1210 00:05:56.583725   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:56.584072   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:56.584105   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:56.584063   97966 retry.go:31] will retry after 856.526353ms: waiting for machine to come up
	I1210 00:05:57.442311   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:57.442739   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:57.442796   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:57.442703   97966 retry.go:31] will retry after 1.178569719s: waiting for machine to come up
	I1210 00:05:58.622338   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:58.622797   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:58.622827   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:58.622728   97966 retry.go:31] will retry after 1.42624777s: waiting for machine to come up
	I1210 00:06:00.051240   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:00.051614   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:00.051640   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:00.051572   97966 retry.go:31] will retry after 1.801666778s: waiting for machine to come up
	I1210 00:06:01.855728   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:01.856159   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:01.856181   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:01.856123   97966 retry.go:31] will retry after 2.078837624s: waiting for machine to come up
	I1210 00:06:03.936907   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:03.937387   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:03.937421   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:03.937345   97966 retry.go:31] will retry after 2.395168214s: waiting for machine to come up
	I1210 00:06:06.336012   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:06.336380   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:06.336409   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:06.336336   97966 retry.go:31] will retry after 2.386978523s: waiting for machine to come up
	I1210 00:06:08.725386   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:08.725781   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:08.725809   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:08.725749   97966 retry.go:31] will retry after 4.346211813s: waiting for machine to come up
	I1210 00:06:13.073905   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.074439   97943 main.go:141] libmachine: (ha-070032) Found IP for machine: 192.168.39.187
	I1210 00:06:13.074469   97943 main.go:141] libmachine: (ha-070032) Reserving static IP address...
	I1210 00:06:13.074487   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has current primary IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.075078   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "ha-070032", mac: "52:54:00:ad:ce:dc", ip: "192.168.39.187"} in network mk-ha-070032
	I1210 00:06:13.145743   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:13.145776   97943 main.go:141] libmachine: (ha-070032) Reserved static IP address: 192.168.39.187
	I1210 00:06:13.145818   97943 main.go:141] libmachine: (ha-070032) Waiting for SSH to be available...
	I1210 00:06:13.148440   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.148825   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032
	I1210 00:06:13.148851   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:ad:ce:dc
	I1210 00:06:13.149012   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:13.149039   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:13.149072   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:13.149085   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:13.149097   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:13.152933   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:06:13.152951   97943 main.go:141] libmachine: (ha-070032) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:06:13.152957   97943 main.go:141] libmachine: (ha-070032) DBG | command : exit 0
	I1210 00:06:13.152962   97943 main.go:141] libmachine: (ha-070032) DBG | err     : exit status 255
	I1210 00:06:13.152969   97943 main.go:141] libmachine: (ha-070032) DBG | output  : 
	I1210 00:06:16.155027   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:16.157296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157685   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.157714   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157840   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:16.157860   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:16.157887   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:16.157900   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:16.157909   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:16.278179   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: <nil>: 
	I1210 00:06:16.278456   97943 main.go:141] libmachine: (ha-070032) KVM machine creation complete!
	I1210 00:06:16.278762   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:16.279308   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279502   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279643   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:06:16.279659   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:16.280933   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:06:16.280956   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:06:16.280962   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:06:16.280968   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.283215   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283661   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.283689   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283820   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.283997   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284144   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284266   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.284430   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.284659   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.284672   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:06:16.381723   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.381748   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:06:16.381756   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.384507   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384824   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.384850   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384978   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.385166   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385349   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385493   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.385645   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.385854   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.385866   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:06:16.482791   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:06:16.482875   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:06:16.482890   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:06:16.482898   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483155   97943 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:06:16.483181   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483360   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.485848   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486193   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.486234   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486327   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.486524   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486696   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486841   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.486993   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.487168   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.487182   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:06:16.599563   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:06:16.599595   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.602261   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602629   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.602659   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602789   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.603020   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603241   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603430   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.603599   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.603761   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.603781   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:06:16.710380   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
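The shell snippet just executed keeps /etc/hosts idempotent: it only changes the file when no line already maps the hostname, and prefers rewriting an existing "127.0.1.1 ..." entry over appending a new one. A rough Go equivalent of that logic (illustrative; the real edit happens on the guest over SSH, and the path here points at a local copy):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell logic above: if no line ends in the
// hostname, rewrite an existing "127.0.1.1 ..." line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	content := string(data)
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(content) {
		return nil // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-copy", "ha-070032"); err != nil {
		fmt.Println(err)
	}
}
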
	I1210 00:06:16.710422   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:06:16.710472   97943 buildroot.go:174] setting up certificates
	I1210 00:06:16.710489   97943 provision.go:84] configureAuth start
	I1210 00:06:16.710503   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.710783   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:16.713296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713682   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.713712   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713807   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.716284   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716639   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.716657   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716807   97943 provision.go:143] copyHostCerts
	I1210 00:06:16.716848   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716882   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:06:16.716898   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716962   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:06:16.717048   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717075   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:06:16.717082   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717107   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:06:16.717158   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717175   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:06:16.717181   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717202   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:06:16.717253   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
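configureAuth issues a server certificate whose SANs cover loopback, the VM's IP, the machine name, localhost and "minikube", signed by the existing CA under .minikube/certs. The sketch below is a generic crypto/x509 illustration of issuing such a SAN-bearing certificate; unlike the log, it generates a throwaway CA in place, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Illustrative only: issue a server certificate carrying the SAN set seen in
// the log (IP and DNS names), signed by a freshly generated throwaway CA.
func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-070032"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
		DNSNames:     []string{"ha-070032", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
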
	I1210 00:06:16.857455   97943 provision.go:177] copyRemoteCerts
	I1210 00:06:16.857514   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:06:16.857542   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.860287   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860660   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.860687   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860918   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.861136   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.861318   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.861436   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:16.940074   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:06:16.940147   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:06:16.961938   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:06:16.962011   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:06:16.982947   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:06:16.983027   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:06:17.003600   97943 provision.go:87] duration metric: took 293.095287ms to configureAuth
	I1210 00:06:17.003631   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:06:17.003823   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:17.003908   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.006244   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006580   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.006608   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006735   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.006932   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007076   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007191   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.007315   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.007484   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.007502   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:06:17.211708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:06:17.211741   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:06:17.211753   97943 main.go:141] libmachine: (ha-070032) Calling .GetURL
	I1210 00:06:17.212951   97943 main.go:141] libmachine: (ha-070032) DBG | Using libvirt version 6000000
	I1210 00:06:17.215245   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215611   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.215644   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215769   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:06:17.215785   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:06:17.215796   97943 client.go:171] duration metric: took 24.337498941s to LocalClient.Create
	I1210 00:06:17.215826   97943 start.go:167] duration metric: took 24.337582238s to libmachine.API.Create "ha-070032"
	I1210 00:06:17.215839   97943 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:06:17.215862   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:06:17.215886   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.216149   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:06:17.216177   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.218250   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218590   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.218632   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218752   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.218921   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.219062   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.219188   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.296211   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:06:17.300251   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:06:17.300276   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:06:17.300345   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:06:17.300421   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:06:17.300431   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:06:17.300529   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:06:17.308961   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:17.331496   97943 start.go:296] duration metric: took 115.636437ms for postStartSetup
	I1210 00:06:17.331591   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:17.332201   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.335151   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335527   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.335569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335747   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:17.335921   97943 start.go:128] duration metric: took 24.475725142s to createHost
	I1210 00:06:17.335945   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.338044   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338384   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.338412   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338541   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.338741   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.338882   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.339001   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.339163   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.339337   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.339348   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:06:17.439329   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789177.417194070
	
	I1210 00:06:17.439361   97943 fix.go:216] guest clock: 1733789177.417194070
	I1210 00:06:17.439372   97943 fix.go:229] Guest: 2024-12-10 00:06:17.41719407 +0000 UTC Remote: 2024-12-10 00:06:17.335933593 +0000 UTC m=+24.582014233 (delta=81.260477ms)
	I1210 00:06:17.439408   97943 fix.go:200] guest clock delta is within tolerance: 81.260477ms
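Before releasing the creation lock, the host runs `date +%s.%N` on the guest and compares the result with its own clock; here the delta was about 81ms, inside tolerance. A minimal sketch of that comparison, parsing the guest output and checking against an assumed one-second tolerance (not minikube's fix.go code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N` and
// returns the absolute difference from the local host clock.
func clockDelta(guest string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	d := time.Since(time.Unix(sec, nsec))
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, err := clockDelta("1733789177.417194070")
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance: %v\n", d, d < time.Second)
}
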
	I1210 00:06:17.439416   97943 start.go:83] releasing machines lock for "ha-070032", held for 24.579311872s
	I1210 00:06:17.439440   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.439778   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.442802   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443261   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.443289   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443497   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444002   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444206   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444324   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:06:17.444401   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.444474   97943 ssh_runner.go:195] Run: cat /version.json
	I1210 00:06:17.444500   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.446933   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447294   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447320   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447352   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447499   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.447688   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.447744   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447772   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447844   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.447953   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.448103   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.448103   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.448278   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.448402   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.553500   97943 ssh_runner.go:195] Run: systemctl --version
	I1210 00:06:17.559183   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:06:17.714099   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:06:17.720445   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:06:17.720522   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:06:17.735693   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
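With CRI-O selected as the runtime, any pre-existing bridge/podman CNI configs are sidelined by renaming them to *.mk_disabled, which is what the `find ... -exec mv` above does. An illustrative Go version of that rename pass over /etc/cni/net.d (paths are placeholders; the real command runs on the guest via sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir to
// <name>.mk_disabled, skipping files that are already disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	files, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	fmt.Println(files, err)
}
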
	I1210 00:06:17.735715   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:06:17.735777   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:06:17.750781   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:06:17.763333   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:06:17.763379   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:06:17.775483   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:06:17.787288   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:06:17.890184   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:06:18.028147   97943 docker.go:233] disabling docker service ...
	I1210 00:06:18.028234   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:06:18.041611   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:06:18.054485   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:06:18.194456   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:06:18.314202   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:06:18.327181   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:06:18.343918   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:06:18.343989   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.353427   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:06:18.353489   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.362859   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.371991   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.381017   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:06:18.391381   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.401252   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.416290   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.426233   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:06:18.435267   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:06:18.435316   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:06:18.447946   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
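The netfilter probe above fails simply because br_netfilter is not loaded yet, so the fallback is `modprobe br_netfilter` followed by enabling IPv4 forwarding. A small sketch of the same check, reading the sysctl file directly and falling back to modprobe (illustrative; the modprobe path still needs root, and the file only appears once the module is loaded):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

func main() {
	data, err := os.ReadFile(bridgeNF)
	if err != nil {
		// The file only exists once br_netfilter is loaded.
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter:", err)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
			return
		}
		if data, err = os.ReadFile(bridgeNF); err != nil {
			fmt.Println("still missing:", err)
			return
		}
	}
	fmt.Println("bridge-nf-call-iptables =", strings.TrimSpace(string(data)))
}
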
	I1210 00:06:18.456951   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:18.573205   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:06:18.656643   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:06:18.656726   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:06:18.661011   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:06:18.661071   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:06:18.664478   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:06:18.701494   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:06:18.701578   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.727238   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.753327   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:06:18.754595   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:18.756947   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757200   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:18.757235   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757445   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:06:18.760940   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:06:18.772727   97943 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:06:18.772828   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:18.772879   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:18.804204   97943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:06:18.804265   97943 ssh_runner.go:195] Run: which lz4
	I1210 00:06:18.807579   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1210 00:06:18.807670   97943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:06:18.811358   97943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:06:18.811386   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:06:19.965583   97943 crio.go:462] duration metric: took 1.157944737s to copy over tarball
	I1210 00:06:19.965660   97943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:06:21.934864   97943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.969164039s)
	I1210 00:06:21.934896   97943 crio.go:469] duration metric: took 1.969285734s to extract the tarball
	I1210 00:06:21.934906   97943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:06:21.970025   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:22.022669   97943 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:06:22.022692   97943 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:06:22.022702   97943 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:06:22.022843   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:06:22.022948   97943 ssh_runner.go:195] Run: crio config
	I1210 00:06:22.066130   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:22.066152   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:22.066160   97943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:06:22.066182   97943 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:06:22.066308   97943 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:06:22.066339   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:06:22.066403   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:06:22.080860   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:06:22.080973   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
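The "auto-enabling control-plane load-balancing" decision a few lines earlier hinges on the `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` probe succeeding before this static pod manifest is written. As a rough illustration of verifying those modules are actually present, one can scan /proc/modules (not minikube's kube-vip.go logic, and with the caveat that modules built directly into the kernel do not appear there):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadedModules returns the set of kernel module names listed in /proc/modules.
func loadedModules() (map[string]bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	mods := map[string]bool{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) > 0 {
			mods[fields[0]] = true
		}
	}
	return mods, s.Err()
}

func main() {
	mods, err := loadedModules()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, m := range []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"} {
		fmt.Printf("%-12s loaded=%v\n", m, mods[m])
	}
}
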
	I1210 00:06:22.081051   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:06:22.089866   97943 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:06:22.089923   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:06:22.098290   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:06:22.112742   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:06:22.127069   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:06:22.141317   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1210 00:06:22.155689   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:06:22.159003   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:06:22.169321   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:22.288035   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:06:22.303534   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:06:22.303559   97943 certs.go:194] generating shared ca certs ...
	I1210 00:06:22.303580   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.303764   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:06:22.303807   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:06:22.303816   97943 certs.go:256] generating profile certs ...
	I1210 00:06:22.303867   97943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:06:22.303881   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt with IP's: []
	I1210 00:06:22.579094   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt ...
	I1210 00:06:22.579127   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt: {Name:mk6da1df398501169ebaa4be6e0991a8cdf439ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579330   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key ...
	I1210 00:06:22.579344   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key: {Name:mkcfad0deb7a44a0416ffc9ec52ed32ba5314a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579449   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8
	I1210 00:06:22.579465   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.254]
	I1210 00:06:22.676685   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 ...
	I1210 00:06:22.676712   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8: {Name:mke16dbfb98e7219f2bbc6176b557aae983cf59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.676895   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 ...
	I1210 00:06:22.676911   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8: {Name:mke38a755e8856925c614e9671ffbd341e4bacfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.677005   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:06:22.677102   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
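The apiserver certificate generated above mixes several kinds of SANs: the HA VIP 192.168.39.254, the node IP 192.168.39.187, loopback, and service-network addresses, where 10.96.0.1 is the ClusterIP the "kubernetes" Service gets, i.e. the first host address of the 10.96.0.0/12 ServiceCIDR from the kubeadm config. Deriving that first service IP from the CIDR is a short net/netip exercise; a sketch:

package main

import (
	"fmt"
	"net/netip"
)

// firstServiceIP returns the address conventionally assigned to the
// "kubernetes" Service: the first host address inside the service CIDR.
func firstServiceIP(serviceCIDR string) (netip.Addr, error) {
	p, err := netip.ParsePrefix(serviceCIDR)
	if err != nil {
		return netip.Addr{}, err
	}
	return p.Masked().Addr().Next(), nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
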
	I1210 00:06:22.677175   97943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:06:22.677191   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt with IP's: []
	I1210 00:06:23.248653   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt ...
	I1210 00:06:23.248694   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt: {Name:mk109f5f541d0487f6eee37e10618be0687d2257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.248940   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key ...
	I1210 00:06:23.248958   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key: {Name:mkb6a55c3dbe59a4c5c10d115460729fd5017c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.249084   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:06:23.249122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:06:23.249145   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:06:23.249169   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:06:23.249185   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:06:23.249208   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:06:23.249231   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:06:23.249252   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:06:23.249332   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:06:23.249393   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:06:23.249407   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:06:23.249449   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:06:23.249487   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:06:23.249528   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:06:23.249593   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:23.249643   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.249668   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.249692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.250316   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:06:23.282882   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:06:23.307116   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:06:23.329842   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:06:23.350860   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:06:23.371360   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:06:23.391801   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:06:23.412467   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:06:23.433690   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:06:23.454439   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:06:23.475132   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:06:23.495728   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
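(Editor's note: the cert pushes above stream file bytes over SSH and write them on the node as root. A rough, hand-rolled sketch of one such copy is below — plain ssh + sudo tee, with the node user, IP, and key path taken from later in this log; minikube's own ssh_runner does the equivalent internally, so this is an illustration, not its actual code.)

package sketch

import (
	"os"
	"os/exec"
)

// copyCertAsRoot streams localPath over SSH and writes it to remotePath
// via sudo tee, since /var/lib/minikube/certs is root-owned on the node.
func copyCertAsRoot(localPath, remotePath string) error {
	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()
	key := "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa"
	cmd := exec.Command("ssh", "-i", key, "docker@192.168.39.187",
		"sudo tee "+remotePath+" >/dev/null")
	cmd.Stdin = src
	return cmd.Run()
}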
	I1210 00:06:23.510105   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:06:23.515363   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:06:23.524990   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528859   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528911   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.534177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:06:23.544011   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:06:23.554049   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558290   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558341   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.563770   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:06:23.574235   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:06:23.584591   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588826   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588880   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.594177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
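(Editor's note: the hash-and-link sequence above is the standard OpenSSL CA layout — compute the subject-name hash of each PEM and point /etc/ssl/certs/<hash>.0 at it. A minimal sketch of that step, assuming openssl is on PATH; the helper name is illustrative.)

package sketch

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCAHashLink mirrors the `openssl x509 -hash` + `ln -fs` pair in the
// log: the <hash>.0 symlink is what lets TLS libraries find the CA by hash.
func installCAHashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}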
	I1210 00:06:23.604355   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:06:23.608126   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:06:23.608176   97943 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:06:23.608256   97943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:06:23.608313   97943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:06:23.644503   97943 cri.go:89] found id: ""
	I1210 00:06:23.644571   97943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:06:23.653924   97943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:06:23.666641   97943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:06:23.677490   97943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:06:23.677512   97943 kubeadm.go:157] found existing configuration files:
	
	I1210 00:06:23.677553   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:06:23.685837   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:06:23.685897   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:06:23.696600   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:06:23.706796   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:06:23.706854   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:06:23.717362   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.727400   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:06:23.727453   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.737844   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:06:23.747833   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:06:23.747889   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
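(Editor's note: the four grep/rm pairs above are the kubeadm stale-config cleanup — any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before init so kubeadm can regenerate it. A condensed sketch, with the endpoint and file list taken from the log and an illustrative helper name; on this first start the files are absent, so the removals are effectively no-ops.)

package sketch

import (
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes kubeconfigs that don't point at the
// expected endpoint so kubeadm init can write fresh ones.
func removeStaleKubeconfigs(endpoint string) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// Missing files and files without the endpoint are both removed.
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f)
		}
	}
}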
	I1210 00:06:23.758170   97943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:06:23.860329   97943 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:06:23.860398   97943 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:06:23.982444   97943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:06:23.982606   97943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:06:23.982761   97943 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:06:23.992051   97943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:06:24.260435   97943 out.go:235]   - Generating certificates and keys ...
	I1210 00:06:24.260672   97943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:06:24.260758   97943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:06:24.260858   97943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:06:24.290159   97943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:06:24.463743   97943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:06:24.802277   97943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:06:24.950429   97943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:06:24.950692   97943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.094704   97943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:06:25.094857   97943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.315955   97943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:06:25.908434   97943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:06:26.061724   97943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:06:26.061977   97943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:06:26.261701   97943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:06:26.508681   97943 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:06:26.626369   97943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:06:26.773060   97943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:06:26.898048   97943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:06:26.900096   97943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:06:26.903197   97943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:06:26.904929   97943 out.go:235]   - Booting up control plane ...
	I1210 00:06:26.905029   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:06:26.905121   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:06:26.905279   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:06:26.919661   97943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:06:26.926359   97943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:06:26.926414   97943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:06:27.050156   97943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:06:27.050350   97943 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:06:27.551278   97943 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.620144ms
	I1210 00:06:27.551408   97943 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:06:33.591605   97943 kubeadm.go:310] [api-check] The API server is healthy after 6.043312277s
	I1210 00:06:33.609669   97943 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:06:33.625260   97943 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:06:33.653756   97943 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:06:33.653955   97943 kubeadm.go:310] [mark-control-plane] Marking the node ha-070032 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:06:33.666679   97943 kubeadm.go:310] [bootstrap-token] Using token: j34izu.9ybowi8hhzn9pxj2
	I1210 00:06:33.668028   97943 out.go:235]   - Configuring RBAC rules ...
	I1210 00:06:33.668176   97943 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:06:33.684358   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:06:33.695755   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:06:33.698959   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:06:33.704573   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:06:33.710289   97943 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:06:34.000325   97943 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:06:34.440225   97943 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:06:35.001489   97943 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:06:35.002397   97943 kubeadm.go:310] 
	I1210 00:06:35.002481   97943 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:06:35.002492   97943 kubeadm.go:310] 
	I1210 00:06:35.002620   97943 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:06:35.002641   97943 kubeadm.go:310] 
	I1210 00:06:35.002668   97943 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:06:35.002729   97943 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:06:35.002789   97943 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:06:35.002807   97943 kubeadm.go:310] 
	I1210 00:06:35.002880   97943 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:06:35.002909   97943 kubeadm.go:310] 
	I1210 00:06:35.002973   97943 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:06:35.002982   97943 kubeadm.go:310] 
	I1210 00:06:35.003062   97943 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:06:35.003170   97943 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:06:35.003276   97943 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:06:35.003287   97943 kubeadm.go:310] 
	I1210 00:06:35.003407   97943 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:06:35.003521   97943 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:06:35.003539   97943 kubeadm.go:310] 
	I1210 00:06:35.003652   97943 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.003744   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 00:06:35.003795   97943 kubeadm.go:310] 	--control-plane 
	I1210 00:06:35.003809   97943 kubeadm.go:310] 
	I1210 00:06:35.003925   97943 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:06:35.003934   97943 kubeadm.go:310] 
	I1210 00:06:35.004033   97943 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.004174   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 00:06:35.004857   97943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:06:35.005000   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:35.005014   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:35.006644   97943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1210 00:06:35.007773   97943 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 00:06:35.013278   97943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1210 00:06:35.013292   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 00:06:35.030575   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 00:06:35.430253   97943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032 minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=true
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:35.453581   97943 ops.go:34] apiserver oom_adj: -16
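(Editor's note: the oom_adj probe above confirms the kubelet gave kube-apiserver a strongly negative OOM score, -16 on this run. A small sketch of the same check, assuming — as the log's bash one-liner does — that pgrep returns a single PID.)

package sketch

import (
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil // "-16" on this run
}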
	I1210 00:06:35.589407   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.090147   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.590386   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.089563   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.589509   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.090045   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.590492   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.670226   97943 kubeadm.go:1113] duration metric: took 3.23992517s to wait for elevateKubeSystemPrivileges
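(Editor's note: the repeated `kubectl get sa default` runs above are a simple poll — retry about every half second until the default service account exists, then bind it to cluster-admin via minikube-rbac. A sketch of the polling half; the two-minute timeout is an assumption, not taken from the log.)

package sketch

import (
	"errors"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds,
// roughly matching the ~500ms cadence visible in the log above.
func waitForDefaultSA(kubectl, kubeconfig string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("default service account never appeared")
}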
	I1210 00:06:38.670279   97943 kubeadm.go:394] duration metric: took 15.062107151s to StartCluster
	I1210 00:06:38.670305   97943 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.670408   97943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.671197   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.671402   97943 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:38.671412   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 00:06:38.671420   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:06:38.671426   97943 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:06:38.671508   97943 addons.go:69] Setting storage-provisioner=true in profile "ha-070032"
	I1210 00:06:38.671518   97943 addons.go:69] Setting default-storageclass=true in profile "ha-070032"
	I1210 00:06:38.671525   97943 addons.go:234] Setting addon storage-provisioner=true in "ha-070032"
	I1210 00:06:38.671543   97943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-070032"
	I1210 00:06:38.671557   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.671580   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:38.671976   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672006   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672032   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.672011   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.687036   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I1210 00:06:38.687249   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I1210 00:06:38.687528   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.687798   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.688109   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688138   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688273   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688294   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688523   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688665   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688726   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.689111   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.689137   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.690837   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.691061   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:06:38.691470   97943 cert_rotation.go:140] Starting client certificate rotation controller
	I1210 00:06:38.691733   97943 addons.go:234] Setting addon default-storageclass=true in "ha-070032"
	I1210 00:06:38.691777   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.692023   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.692051   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.704916   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1210 00:06:38.705299   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.705773   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.705793   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.705818   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I1210 00:06:38.706223   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.706266   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.706378   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.706814   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.706838   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.707185   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.707762   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.707794   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.707810   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.709839   97943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:06:38.711065   97943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.711090   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:06:38.711109   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.713927   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714361   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.714394   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714642   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.714813   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.715016   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.715175   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.722431   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I1210 00:06:38.722864   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.723276   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.723296   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.723661   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.723828   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.725166   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.725377   97943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:38.725391   97943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:06:38.725405   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.727990   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728394   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.728425   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728556   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.728718   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.728851   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.729006   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.796897   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 00:06:38.828298   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.901174   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:39.211073   97943 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
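(Editor's note: the sed pipeline above rewrites the coredns ConfigMap in place. After the replace, the Corefile gains a hosts block ahead of the forward directive, plus a log directive before errors, so host.minikube.internal resolves to the host gateway. The relevant fragment, reconstructed from the sed expression, looks roughly like:)

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf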
	I1210 00:06:39.326332   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326356   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326414   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326438   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326675   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326704   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326718   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326722   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326732   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326740   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326767   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326783   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326792   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326799   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326952   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326963   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327027   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.327032   97943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:06:39.327042   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327048   97943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:06:39.327148   97943 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1210 00:06:39.327161   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.327179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.327194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.340698   97943 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1210 00:06:39.341273   97943 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1210 00:06:39.341288   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.341295   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.341298   97943 round_trippers.go:473]     Content-Type: application/json
	I1210 00:06:39.341303   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.344902   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:06:39.345090   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.345105   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.345391   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.345413   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.345420   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.347624   97943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:06:39.348926   97943 addons.go:510] duration metric: took 677.497681ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 00:06:39.348959   97943 start.go:246] waiting for cluster config update ...
	I1210 00:06:39.348973   97943 start.go:255] writing updated cluster config ...
	I1210 00:06:39.350585   97943 out.go:201] 
	I1210 00:06:39.351879   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:39.351939   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.353507   97943 out.go:177] * Starting "ha-070032-m02" control-plane node in "ha-070032" cluster
	I1210 00:06:39.354653   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:39.354670   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:06:39.354757   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:06:39.354768   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:06:39.354822   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.354986   97943 start.go:360] acquireMachinesLock for ha-070032-m02: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:06:39.355029   97943 start.go:364] duration metric: took 24.389µs to acquireMachinesLock for "ha-070032-m02"
	I1210 00:06:39.355043   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:39.355103   97943 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1210 00:06:39.356785   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:06:39.356859   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:39.356884   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:39.373740   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I1210 00:06:39.374206   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:39.374743   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:39.374764   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:39.375056   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:39.375244   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:06:39.375358   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:06:39.375496   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:06:39.375520   97943 client.go:168] LocalClient.Create starting
	I1210 00:06:39.375545   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:06:39.375577   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375591   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375644   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:06:39.375662   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375672   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375686   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:06:39.375694   97943 main.go:141] libmachine: (ha-070032-m02) Calling .PreCreateCheck
	I1210 00:06:39.375822   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:06:39.376224   97943 main.go:141] libmachine: Creating machine...
	I1210 00:06:39.376240   97943 main.go:141] libmachine: (ha-070032-m02) Calling .Create
	I1210 00:06:39.376365   97943 main.go:141] libmachine: (ha-070032-m02) Creating KVM machine...
	I1210 00:06:39.377639   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing default KVM network
	I1210 00:06:39.377788   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing private KVM network mk-ha-070032
	I1210 00:06:39.377977   97943 main.go:141] libmachine: (ha-070032-m02) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.378006   97943 main.go:141] libmachine: (ha-070032-m02) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:06:39.378048   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.377952   98310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.378126   97943 main.go:141] libmachine: (ha-070032-m02) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:06:39.655003   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.654863   98310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa...
	I1210 00:06:39.917373   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917261   98310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk...
	I1210 00:06:39.917409   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing magic tar header
	I1210 00:06:39.917424   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing SSH key tar header
	I1210 00:06:39.917437   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917371   98310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.917498   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02
	I1210 00:06:39.917529   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 (perms=drwx------)
	I1210 00:06:39.917548   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:06:39.917560   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:06:39.917572   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:06:39.917584   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:06:39.917605   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:06:39.917616   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.917629   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:06:39.917642   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:06:39.917652   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:06:39.917664   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:06:39.917673   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home
	I1210 00:06:39.917683   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:39.917707   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Skipping /home - not owner
	I1210 00:06:39.918676   97943 main.go:141] libmachine: (ha-070032-m02) define libvirt domain using xml: 
	I1210 00:06:39.918698   97943 main.go:141] libmachine: (ha-070032-m02) <domain type='kvm'>
	I1210 00:06:39.918768   97943 main.go:141] libmachine: (ha-070032-m02)   <name>ha-070032-m02</name>
	I1210 00:06:39.918816   97943 main.go:141] libmachine: (ha-070032-m02)   <memory unit='MiB'>2200</memory>
	I1210 00:06:39.918844   97943 main.go:141] libmachine: (ha-070032-m02)   <vcpu>2</vcpu>
	I1210 00:06:39.918860   97943 main.go:141] libmachine: (ha-070032-m02)   <features>
	I1210 00:06:39.918868   97943 main.go:141] libmachine: (ha-070032-m02)     <acpi/>
	I1210 00:06:39.918874   97943 main.go:141] libmachine: (ha-070032-m02)     <apic/>
	I1210 00:06:39.918881   97943 main.go:141] libmachine: (ha-070032-m02)     <pae/>
	I1210 00:06:39.918890   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.918898   97943 main.go:141] libmachine: (ha-070032-m02)   </features>
	I1210 00:06:39.918908   97943 main.go:141] libmachine: (ha-070032-m02)   <cpu mode='host-passthrough'>
	I1210 00:06:39.918914   97943 main.go:141] libmachine: (ha-070032-m02)   
	I1210 00:06:39.918920   97943 main.go:141] libmachine: (ha-070032-m02)   </cpu>
	I1210 00:06:39.918932   97943 main.go:141] libmachine: (ha-070032-m02)   <os>
	I1210 00:06:39.918939   97943 main.go:141] libmachine: (ha-070032-m02)     <type>hvm</type>
	I1210 00:06:39.918951   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='cdrom'/>
	I1210 00:06:39.918960   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='hd'/>
	I1210 00:06:39.918969   97943 main.go:141] libmachine: (ha-070032-m02)     <bootmenu enable='no'/>
	I1210 00:06:39.918978   97943 main.go:141] libmachine: (ha-070032-m02)   </os>
	I1210 00:06:39.918985   97943 main.go:141] libmachine: (ha-070032-m02)   <devices>
	I1210 00:06:39.918996   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='cdrom'>
	I1210 00:06:39.919011   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/boot2docker.iso'/>
	I1210 00:06:39.919023   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hdc' bus='scsi'/>
	I1210 00:06:39.919034   97943 main.go:141] libmachine: (ha-070032-m02)       <readonly/>
	I1210 00:06:39.919044   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919053   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='disk'>
	I1210 00:06:39.919066   97943 main.go:141] libmachine: (ha-070032-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:06:39.919085   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk'/>
	I1210 00:06:39.919096   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hda' bus='virtio'/>
	I1210 00:06:39.919106   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919113   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919121   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='mk-ha-070032'/>
	I1210 00:06:39.919132   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919140   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919150   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919158   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='default'/>
	I1210 00:06:39.919168   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919177   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919187   97943 main.go:141] libmachine: (ha-070032-m02)     <serial type='pty'>
	I1210 00:06:39.919201   97943 main.go:141] libmachine: (ha-070032-m02)       <target port='0'/>
	I1210 00:06:39.919211   97943 main.go:141] libmachine: (ha-070032-m02)     </serial>
	I1210 00:06:39.919220   97943 main.go:141] libmachine: (ha-070032-m02)     <console type='pty'>
	I1210 00:06:39.919230   97943 main.go:141] libmachine: (ha-070032-m02)       <target type='serial' port='0'/>
	I1210 00:06:39.919239   97943 main.go:141] libmachine: (ha-070032-m02)     </console>
	I1210 00:06:39.919249   97943 main.go:141] libmachine: (ha-070032-m02)     <rng model='virtio'>
	I1210 00:06:39.919261   97943 main.go:141] libmachine: (ha-070032-m02)       <backend model='random'>/dev/random</backend>
	I1210 00:06:39.919271   97943 main.go:141] libmachine: (ha-070032-m02)     </rng>
	I1210 00:06:39.919278   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919287   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919296   97943 main.go:141] libmachine: (ha-070032-m02)   </devices>
	I1210 00:06:39.919305   97943 main.go:141] libmachine: (ha-070032-m02) </domain>
	I1210 00:06:39.919315   97943 main.go:141] libmachine: (ha-070032-m02) 
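(Editor's note: once the domain XML above is assembled, defining and booting the guest is the usual libvirt define/start sequence. The kvm2 driver goes through the libvirt API; the hand-rolled virsh equivalent below is only a sketch of the same two steps.)

package sketch

import "os/exec"

// defineAndStartDomain is a stand-in for what the kvm2 driver does via the
// libvirt API: register the domain from its XML file, then start it.
func defineAndStartDomain(xmlPath, name string) error {
	if err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).Run(); err != nil {
		return err
	}
	return exec.Command("virsh", "-c", "qemu:///system", "start", name).Run()
}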
	I1210 00:06:39.926117   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:48:53:e3 in network default
	I1210 00:06:39.926859   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring networks are active...
	I1210 00:06:39.926888   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:39.927703   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network default is active
	I1210 00:06:39.928027   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network mk-ha-070032 is active
	I1210 00:06:39.928408   97943 main.go:141] libmachine: (ha-070032-m02) Getting domain xml...
	I1210 00:06:39.929223   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:41.130495   97943 main.go:141] libmachine: (ha-070032-m02) Waiting to get IP...
	I1210 00:06:41.131359   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.131738   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.131767   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.131705   98310 retry.go:31] will retry after 310.664463ms: waiting for machine to come up
	I1210 00:06:41.444273   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.444703   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.444737   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.444646   98310 retry.go:31] will retry after 238.189723ms: waiting for machine to come up
	I1210 00:06:41.683967   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.684372   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.684404   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.684311   98310 retry.go:31] will retry after 302.841079ms: waiting for machine to come up
	I1210 00:06:41.988975   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.989468   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.989592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.989406   98310 retry.go:31] will retry after 546.191287ms: waiting for machine to come up
	I1210 00:06:42.536796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:42.537343   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:42.537376   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:42.537279   98310 retry.go:31] will retry after 759.959183ms: waiting for machine to come up
	I1210 00:06:43.299192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.299592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.299618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.299550   98310 retry.go:31] will retry after 662.514804ms: waiting for machine to come up
	I1210 00:06:43.963192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.963574   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.963604   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.963510   98310 retry.go:31] will retry after 928.068602ms: waiting for machine to come up
	I1210 00:06:44.892786   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:44.893282   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:44.893308   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:44.893234   98310 retry.go:31] will retry after 1.121647824s: waiting for machine to come up
	I1210 00:06:46.016637   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:46.017063   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:46.017120   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:46.017054   98310 retry.go:31] will retry after 1.26533881s: waiting for machine to come up
	I1210 00:06:47.283663   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:47.284077   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:47.284103   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:47.284029   98310 retry.go:31] will retry after 1.959318884s: waiting for machine to come up
	I1210 00:06:49.245134   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:49.245690   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:49.245721   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:49.245628   98310 retry.go:31] will retry after 2.080479898s: waiting for machine to come up
	I1210 00:06:51.327593   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:51.327959   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:51.327986   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:51.327912   98310 retry.go:31] will retry after 3.384865721s: waiting for machine to come up
	I1210 00:06:54.714736   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:54.715082   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:54.715116   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:54.715033   98310 retry.go:31] will retry after 4.262963095s: waiting for machine to come up
	I1210 00:06:58.982522   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:58.982919   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:58.982944   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:58.982868   98310 retry.go:31] will retry after 4.754254966s: waiting for machine to come up
	I1210 00:07:03.739570   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740201   97943 main.go:141] libmachine: (ha-070032-m02) Found IP for machine: 192.168.39.198
	I1210 00:07:03.740228   97943 main.go:141] libmachine: (ha-070032-m02) Reserving static IP address...
	I1210 00:07:03.740250   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740875   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "ha-070032-m02", mac: "52:54:00:a4:53:39", ip: "192.168.39.198"} in network mk-ha-070032
	I1210 00:07:03.810694   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:03.810726   97943 main.go:141] libmachine: (ha-070032-m02) Reserved static IP address: 192.168.39.198
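The "will retry after ..." lines above show the IP-discovery loop: poll the libvirt DHCP leases with a growing, jittered delay until the guest gets an address. A minimal sketch of that pattern follows; it is illustrative only (not minikube's retry.go), and lookupIP is a hypothetical stand-in for the lease query.

// Minimal sketch: poll a lookup function with a growing, jittered delay
// until it returns a value or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 300 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Jitter and grow the delay, mirroring the "will retry after ..." log lines.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.198", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}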
	I1210 00:07:03.810777   97943 main.go:141] libmachine: (ha-070032-m02) Waiting for SSH to be available...
	I1210 00:07:03.813164   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.813481   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032
	I1210 00:07:03.813508   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:a4:53:39
	I1210 00:07:03.813691   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:03.813726   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:03.813759   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:03.813774   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:03.813802   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:03.817377   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:07:03.817395   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:07:03.817406   97943 main.go:141] libmachine: (ha-070032-m02) DBG | command : exit 0
	I1210 00:07:03.817413   97943 main.go:141] libmachine: (ha-070032-m02) DBG | err     : exit status 255
	I1210 00:07:03.817429   97943 main.go:141] libmachine: (ha-070032-m02) DBG | output  : 
	I1210 00:07:06.818972   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:06.821618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822027   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.822055   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822215   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:06.822245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:06.822283   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:06.822309   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:06.822322   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:06.950206   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: <nil>: 
	I1210 00:07:06.950523   97943 main.go:141] libmachine: (ha-070032-m02) KVM machine creation complete!
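The SSH probe above (first attempt fails with exit status 255, a later attempt succeeds) is the usual "run `exit 0` over SSH until the guest answers" readiness check. A minimal sketch of that probe, assuming an external ssh client as in the log; the address and key path are placeholders.

// Sketch of the "exit 0" SSH readiness probe: shell out to /usr/bin/ssh
// with a short timeout and retry until the guest accepts the connection.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForSSH(addr, keyPath string, tries int) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	var err error
	for i := 0; i < tries; i++ {
		if err = exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
			return nil // guest answered and ran `exit 0`
		}
		time.Sleep(3 * time.Second) // roughly the gap between probe attempts in the log
	}
	return fmt.Errorf("ssh never became available: %w", err)
}

func main() {
	fmt.Println(waitForSSH("192.168.39.198", "/path/to/id_rsa", 5))
}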
	I1210 00:07:06.950797   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:06.951365   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951576   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951700   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:07:06.951712   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetState
	I1210 00:07:06.952852   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:07:06.952870   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:07:06.952875   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:07:06.952881   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:06.955132   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955556   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.955577   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955708   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:06.955904   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956047   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:06.956344   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:06.956613   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:06.956635   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:07:07.065432   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.065465   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:07:07.065472   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.068281   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068647   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.068676   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068789   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.069000   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069205   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069353   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.069507   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.069682   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.069696   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:07:07.179172   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:07:07.179254   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:07:07.179270   97943 main.go:141] libmachine: Provisioning with buildroot...
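The provisioner is picked from the `cat /etc/os-release` output shown above (ID=buildroot). A small sketch of that detection, assuming simple key=value parsing; the logic is illustrative, not libmachine's.

// Sketch: parse /etc/os-release output and read the ID field.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseOSRelease(out string) map[string]string {
	kv := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		kv[parts[0]] = strings.Trim(parts[1], `"`)
	}
	return kv
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(out)
	fmt.Println("detected provisioner:", info["ID"]) // "buildroot"
}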
	I1210 00:07:07.179281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179507   97943 buildroot.go:166] provisioning hostname "ha-070032-m02"
	I1210 00:07:07.179525   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179714   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.182380   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182709   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.182735   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182903   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.183097   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183236   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183392   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.183547   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.183709   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.183720   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m02 && echo "ha-070032-m02" | sudo tee /etc/hostname
	I1210 00:07:07.308107   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m02
	
	I1210 00:07:07.308157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.310796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311128   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.311159   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311367   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.311544   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311834   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.312007   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.312178   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.312195   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:07:07.430746   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.430783   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:07:07.430808   97943 buildroot.go:174] setting up certificates
	I1210 00:07:07.430826   97943 provision.go:84] configureAuth start
	I1210 00:07:07.430840   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.431122   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:07.433939   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434313   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.434337   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434511   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.436908   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437220   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.437245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437409   97943 provision.go:143] copyHostCerts
	I1210 00:07:07.437448   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437491   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:07:07.437503   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437576   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:07:07.437681   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437707   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:07:07.437715   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437755   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:07:07.437820   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437852   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:07:07.437861   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437895   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:07:07.437968   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m02 san=[127.0.0.1 192.168.39.198 ha-070032-m02 localhost minikube]
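The server cert above is generated with the SAN list shown in the log (loopback, the node IP, hostnames). A compact sketch of issuing such a certificate with Go's crypto/x509 follows; for brevity it is self-signed, whereas minikube signs it with the cluster CA.

// Sketch only: build a server certificate whose SANs match the log's san=[...] list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-070032-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-070032-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.198")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}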
	I1210 00:07:08.044773   97943 provision.go:177] copyRemoteCerts
	I1210 00:07:08.044851   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:07:08.044891   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.047538   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.047846   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.047877   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.048076   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.048336   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.048503   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.048649   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.132237   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:07:08.132310   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:07:08.154520   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:07:08.154605   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:07:08.175951   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:07:08.176034   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:07:08.197284   97943 provision.go:87] duration metric: took 766.441651ms to configureAuth
	I1210 00:07:08.197318   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:07:08.197534   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:08.197630   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.200256   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200605   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.200631   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200777   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.200956   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201156   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201290   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.201439   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.201609   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.201622   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:07:08.422427   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:07:08.422470   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:07:08.422479   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetURL
	I1210 00:07:08.423873   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using libvirt version 6000000
	I1210 00:07:08.426057   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426388   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.426419   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426586   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:07:08.426605   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:07:08.426616   97943 client.go:171] duration metric: took 29.051087497s to LocalClient.Create
	I1210 00:07:08.426651   97943 start.go:167] duration metric: took 29.051156503s to libmachine.API.Create "ha-070032"
	I1210 00:07:08.426663   97943 start.go:293] postStartSetup for "ha-070032-m02" (driver="kvm2")
	I1210 00:07:08.426676   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:07:08.426697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.426973   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:07:08.427006   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.429163   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429425   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.429445   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429585   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.429771   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.429939   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.430073   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.511841   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:07:08.515628   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:07:08.515647   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:07:08.515716   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:07:08.515790   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:07:08.515798   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:07:08.515877   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:07:08.524177   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:08.545083   97943 start.go:296] duration metric: took 118.406585ms for postStartSetup
	I1210 00:07:08.545129   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:08.545727   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.548447   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.548762   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.548790   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.549019   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:08.549239   97943 start.go:128] duration metric: took 29.194124447s to createHost
	I1210 00:07:08.549263   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.551249   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551581   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.551601   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551788   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.551950   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552224   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.552368   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.552535   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.552544   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:07:08.658708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789228.640009863
	
	I1210 00:07:08.658732   97943 fix.go:216] guest clock: 1733789228.640009863
	I1210 00:07:08.658742   97943 fix.go:229] Guest: 2024-12-10 00:07:08.640009863 +0000 UTC Remote: 2024-12-10 00:07:08.549251378 +0000 UTC m=+75.795332018 (delta=90.758485ms)
	I1210 00:07:08.658764   97943 fix.go:200] guest clock delta is within tolerance: 90.758485ms
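The guest-clock check above compares the guest's `date +%s.%N` with the host clock and accepts the machine when the delta is within tolerance. A minimal sketch of that comparison, using the values from the log; the tolerance value here is an assumption for illustration.

// Sketch: parse the guest's seconds.nanoseconds timestamp and compare it
// against the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDeltaOK(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Values taken from the log lines above (delta is roughly 90ms).
	host := time.Unix(1733789228, 549251378)
	delta, ok := clockDeltaOK("1733789228.640009863", host, 2*time.Second)
	fmt.Println(delta, ok)
}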
	I1210 00:07:08.658772   97943 start.go:83] releasing machines lock for "ha-070032-m02", held for 29.303735455s
	I1210 00:07:08.658798   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.659077   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.661426   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.661743   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.661779   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.663916   97943 out.go:177] * Found network options:
	I1210 00:07:08.665147   97943 out.go:177]   - NO_PROXY=192.168.39.187
	W1210 00:07:08.666190   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.666213   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666724   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666867   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666999   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:07:08.667045   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	W1210 00:07:08.667058   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.667145   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:07:08.667170   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.669614   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669829   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669978   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670007   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670217   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670241   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670437   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670446   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670629   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670648   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.670779   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670926   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.901492   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:07:08.907747   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:07:08.907817   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:07:08.923205   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:07:08.923229   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:07:08.923295   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:07:08.937553   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:07:08.950281   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:07:08.950346   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:07:08.962860   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:07:08.975314   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:07:09.086709   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:07:09.237022   97943 docker.go:233] disabling docker service ...
	I1210 00:07:09.237103   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:07:09.249910   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:07:09.261842   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:07:09.377487   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:07:09.489077   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:07:09.503310   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:07:09.520074   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:07:09.520146   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.529237   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:07:09.529299   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.538814   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.547790   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.557022   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:07:09.566274   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.575677   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.591166   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.600226   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:07:09.608899   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:07:09.608959   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:07:09.621054   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
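The three commands above are the netfilter fallback: the bridge sysctl is missing until br_netfilter is loaded, so the failed check is tolerated, the module is loaded, and IP forwarding is enabled. A small sketch of that sequence, assuming passwordless sudo on the guest; error handling is simplified.

// Sketch of the fallback: check the bridge sysctl, load br_netfilter if it
// is absent, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only exists once the module is loaded; try loading it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("br_netfilter unavailable: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() { fmt.Println(ensureBridgeNetfilter()) }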
	I1210 00:07:09.630324   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:09.745895   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:07:09.836812   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:07:09.836886   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:07:09.841320   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:07:09.841380   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:07:09.845003   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:07:09.887045   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:07:09.887158   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.913628   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.940544   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:07:09.941808   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:07:09.942959   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:09.945644   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946026   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:09.946058   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946322   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:07:09.950215   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:09.961995   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:07:09.962176   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:09.962427   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.962471   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.977140   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I1210 00:07:09.977521   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.978002   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.978024   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.978339   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.978526   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:07:09.979937   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:09.980239   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.980281   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.994247   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 00:07:09.994760   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.995248   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.995276   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.995617   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.995804   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:09.995981   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.198
	I1210 00:07:09.995996   97943 certs.go:194] generating shared ca certs ...
	I1210 00:07:09.996013   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:09.996181   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:07:09.996237   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:07:09.996250   97943 certs.go:256] generating profile certs ...
	I1210 00:07:09.996340   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:07:09.996369   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880
	I1210 00:07:09.996386   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.254]
	I1210 00:07:10.076485   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 ...
	I1210 00:07:10.076513   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880: {Name:mk063fa61de97dbebc815f8cdc0b8ad5f6ad42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076683   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 ...
	I1210 00:07:10.076697   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880: {Name:mk6197070a633b3c7bff009f36273929319901d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076768   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:07:10.076894   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:07:10.077019   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:07:10.077036   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:07:10.077051   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:07:10.077064   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:07:10.077079   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:07:10.077092   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:07:10.077105   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:07:10.077118   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:07:10.077130   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:07:10.077177   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:07:10.077207   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:07:10.077219   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:07:10.077240   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:07:10.077261   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:07:10.077283   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:07:10.077318   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:10.077343   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.077356   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.077368   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.077402   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:10.080314   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080656   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:10.080686   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080849   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:10.081053   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:10.081213   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:10.081346   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:10.150955   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:07:10.156109   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:07:10.172000   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:07:10.175843   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:07:10.191569   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:07:10.195845   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:07:10.205344   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:07:10.208990   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:07:10.218513   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:07:10.222172   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:07:10.231444   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:07:10.235751   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:07:10.245673   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:07:10.268586   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:07:10.289301   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:07:10.309755   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:07:10.330372   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 00:07:10.350734   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:07:10.370944   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:07:10.391160   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:07:10.411354   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:07:10.431480   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:07:10.453051   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:07:10.473317   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:07:10.487731   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:07:10.501999   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:07:10.516876   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:07:10.531860   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:07:10.546723   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:07:10.561653   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:07:10.575903   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:07:10.580966   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:07:10.590633   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594516   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594555   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.599765   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:07:10.609423   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:07:10.619123   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623118   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623159   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.628240   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:07:10.637834   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:07:10.647418   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651160   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651204   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.656233   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
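
The three test/ln/openssl exchanges above implement the standard OpenSSL trust-directory layout: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink, where the hash is what `openssl x509 -hash -noout` prints (3ec20f2e, b5213941, 51391683 here). A hedged Go sketch of the same idea, shelling out to openssl rather than re-implementing the subject hash:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert installs certPath under certsDir using the
    // <openssl-subject-hash>.0 naming that the log lines above rely on.
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link, mirroring ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
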
	I1210 00:07:10.666013   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:07:10.669458   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:07:10.669508   97943 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.31.2 crio true true} ...
	I1210 00:07:10.669598   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
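
In the kubelet unit drop-in above, the empty ExecStart= followed by a second ExecStart= is the usual systemd idiom for replacing (not appending to) the ExecStart of the base unit. A rough sketch of how such a drop-in could be generated with text/template; the field values are the ones from this log, the template itself is illustrative and not minikube's:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants={{.Runtime}}.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from the logged unit above; errors ignored for brevity.
        _ = t.Execute(os.Stdout, map[string]string{
            "Runtime":     "crio",
            "KubeletPath": "/var/lib/minikube/binaries/v1.31.2/kubelet",
            "NodeName":    "ha-070032-m02",
            "NodeIP":      "192.168.39.198",
        })
    }
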
	I1210 00:07:10.669628   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:07:10.669651   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:07:10.689973   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:07:10.690046   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
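
The kube-vip manifest printed above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines later, so the kubelet runs it as a static pod that advertises the 192.168.39.254 VIP and (because lb_enable/lb_port were auto-added at kube-vip.go:167) load-balances the control plane on 8443. A small, purely illustrative Go check that the generated file is valid YAML and declares the expected Pod:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var manifest struct {
            APIVersion string `yaml:"apiVersion"`
            Kind       string `yaml:"kind"`
            Metadata   struct {
                Name      string `yaml:"name"`
                Namespace string `yaml:"namespace"`
            } `yaml:"metadata"`
        }
        if err := yaml.Unmarshal(data, &manifest); err != nil {
            panic(err)
        }
        if manifest.Kind != "Pod" || manifest.Metadata.Name != "kube-vip" {
            fmt.Fprintln(os.Stderr, "unexpected manifest:", manifest.Kind, manifest.Metadata.Name)
            os.Exit(1)
        }
        fmt.Println("kube-vip static pod manifest looks sane:", manifest.Metadata.Namespace+"/"+manifest.Metadata.Name)
    }
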
	I1210 00:07:10.690097   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.699806   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:07:10.699859   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.709208   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:07:10.709234   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.709289   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1210 00:07:10.709322   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1210 00:07:10.709296   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.713239   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:07:10.713260   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:07:11.639149   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.639234   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.643871   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:07:11.643902   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:07:11.758059   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:11.787926   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.788041   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.795093   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:07:11.795140   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
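
Each of the three downloads above carries a `?checksum=file:<url>.sha256` parameter, i.e. the fetched binary is expected to match the SHA-256 digest published next to it on dl.k8s.io. A minimal sketch of that verification (assuming the .sha256 file's first whitespace-separated field is the hex digest, which is how the Kubernetes release files are laid out):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verifySHA256 compares the digest of binPath against the digest stored in sumPath.
    func verifySHA256(binPath, sumPath string) error {
        want, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(want))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file %s", sumPath)
        }
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != fields[0] {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binPath, got, fields[0])
        }
        return nil
    }

    func main() {
        // Usage: verify <path-to-binary>, with <path-to-binary>.sha256 alongside it.
        if err := verifySHA256(os.Args[1], os.Args[1]+".sha256"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("checksum OK")
    }
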
	I1210 00:07:12.180780   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:07:12.189342   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:07:12.205977   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:07:12.220614   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:07:12.235844   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:07:12.239089   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
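
The bash one-liner above is an idempotent /etc/hosts edit: strip any existing line ending in a tab plus control-plane.minikube.internal, append the current VIP mapping, and copy the result back into place. The same logic as a standalone Go sketch (blank lines are dropped and error handling is simplified; this is an illustration, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites the hosts file so that exactly one line maps host to ip,
    // mirroring the grep -v / echo / cp pipeline in the log line above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any stale mapping for this host
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
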
	I1210 00:07:12.251338   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:12.381143   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:12.396098   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:12.396594   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:12.396651   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:12.412619   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1210 00:07:12.413166   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:12.413744   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:12.413766   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:12.414184   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:12.414391   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:12.414627   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:07:12.414728   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:07:12.414747   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:12.418002   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418418   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:12.418450   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418629   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:12.418810   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:12.418994   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:12.419164   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:12.570827   97943 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:12.570886   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I1210 00:07:32.921639   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (20.350728679s)
	I1210 00:07:32.921682   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:07:33.411739   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m02 minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:07:33.552589   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:07:33.681991   97943 start.go:319] duration metric: took 21.26735926s to joinCluster
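
The trailing dash in the `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-` call above removes the taint, so the freshly joined control-plane node can also run ordinary workloads (minikube tracks it as Worker:true). The equivalent operation via client-go might look roughly like this, using the kubeconfig path and node name from this run; it is a sketch, not minikube's implementation:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // removeControlPlaneTaint drops the control-plane NoSchedule taint from a node,
    // mirroring the trailing-dash form of the kubectl taint command in the log above.
    func removeControlPlaneTaint(client kubernetes.Interface, name string) error {
        node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        var kept []corev1.Taint
        for _, t := range node.Spec.Taints {
            if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
                continue
            }
            kept = append(kept, t)
        }
        node.Spec.Taints = kept
        _, err = client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := removeControlPlaneTaint(client, "ha-070032-m02"); err != nil {
            panic(err)
        }
    }
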
	I1210 00:07:33.682079   97943 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:33.682486   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:33.683556   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:07:33.684723   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:33.911972   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:33.951142   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:07:33.951400   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:07:33.951471   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:07:33.951667   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m02" to be "Ready" ...
	I1210 00:07:33.951780   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:33.951788   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:33.951796   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:33.951800   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:33.961739   97943 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1210 00:07:34.452167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.452198   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.452211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.452219   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.456196   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:34.952070   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.952094   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.952105   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.952111   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.957522   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:07:35.452860   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.452883   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.452890   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.452894   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.456005   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.952021   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.952048   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.952058   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.952063   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.955318   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.955854   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:36.452184   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.452211   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.452222   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.452229   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.455126   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:36.951926   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.951955   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.951966   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.951973   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.956909   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:37.452305   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.452330   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.452341   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.452348   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.458679   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:37.952074   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.952096   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.952105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.952111   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.954863   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.452953   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.452983   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.452996   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.453003   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.455946   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.456796   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:38.952594   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.952617   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.952626   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.952630   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.955438   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:39.452632   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.452657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.452669   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.452675   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.455716   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:39.952848   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.952879   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.952893   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.952899   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.956221   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.452071   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.452095   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.452105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.452112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.455375   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.952464   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.952488   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.952507   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.952512   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.955445   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:40.956051   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:41.452509   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.452534   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.452542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.452547   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.455649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:41.952634   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.952657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.952666   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.952669   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.955344   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.452001   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.452023   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.452032   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.452036   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.454753   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.952401   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.952423   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.952436   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.952440   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.955178   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.451951   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.451974   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.451982   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.451986   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.454333   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.454867   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:43.951938   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.951963   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.951973   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.951978   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.954971   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.452196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.452218   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.452225   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.452230   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.455145   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.952295   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.952319   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.952327   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.952331   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.955347   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:45.452137   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.452165   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.452176   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.452181   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.477510   97943 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1210 00:07:45.477938   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:45.952299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.952324   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.952332   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.952335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.955321   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:46.452358   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.452384   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.452393   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.452397   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.455541   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:46.952608   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.952634   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.952643   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.952647   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.957412   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:47.452449   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.452471   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.452480   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.452484   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.455610   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.952117   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.952140   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.952153   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.952158   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.955292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.956098   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:48.452506   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.452532   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.452539   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.452543   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.455102   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:48.952221   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.952248   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.952258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.952265   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.955311   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.452304   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.452327   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.452335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.452340   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.455564   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.952482   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.952504   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.952512   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.952516   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.955476   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.452216   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.452240   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.452248   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.452252   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.455231   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.455908   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:50.952301   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.952323   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.952331   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.952335   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.955916   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.452010   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.452030   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.452039   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.452042   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.454528   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.455097   97943 node_ready.go:49] node "ha-070032-m02" has status "Ready":"True"
	I1210 00:07:51.455120   97943 node_ready.go:38] duration metric: took 17.50342824s for node "ha-070032-m02" to be "Ready" ...
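
The loop that just finished polled GET /api/v1/nodes/ha-070032-m02 about every 500ms until the Ready condition flipped to True, roughly 17.5s after the join. A comparable wait written against client-go could be sketched as follows (kubeconfig path and node name taken from the log; the rest is illustrative, not minikube's node_ready implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log above
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-070032-m02", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for node to become Ready")
    }
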
	I1210 00:07:51.455132   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:07:51.455240   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:51.455254   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.455263   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.455267   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.459208   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.466339   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.466409   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:07:51.466417   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.466423   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.466427   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.469050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.469653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.469667   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.469674   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.469678   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.472023   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.472637   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.472656   97943 pod_ready.go:82] duration metric: took 6.295928ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472667   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472740   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:07:51.472751   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.472759   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.472768   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.475075   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.475717   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.475733   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.475739   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.475743   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.477769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.478274   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.478291   97943 pod_ready.go:82] duration metric: took 5.614539ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478301   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478367   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:07:51.478379   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.478388   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.478394   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.480522   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.481177   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.481192   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.481202   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.481209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.483181   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:07:51.483658   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.483673   97943 pod_ready.go:82] duration metric: took 5.36618ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483680   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483721   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:07:51.483729   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.483736   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.483740   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.485816   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.486281   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.486294   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.486301   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.486305   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.488586   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.489007   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.489022   97943 pod_ready.go:82] duration metric: took 5.33676ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.489033   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.652421   97943 request.go:632] Waited for 163.314648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652507   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652514   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.652522   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.652529   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.655875   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.852945   97943 request.go:632] Waited for 196.352422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853007   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853013   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.853021   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.853024   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.855755   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.856291   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.856309   97943 pod_ready.go:82] duration metric: took 367.27061ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
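
The "Waited ... due to client-side throttling, not priority and fairness" messages in this block come from client-go's client-side rate limiter: the rest.Config logged earlier has QPS:0 and Burst:0, which client-go treats as its defaults of roughly 5 requests/s with a burst of 10, so the rapid alternation of pod and node GETs starts to queue. If higher limits were wanted they are plain fields on the config; a minimal sketch (kubeconfig path from this run, values purely illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
        if err != nil {
            panic(err)
        }
        // QPS/Burst of 0 fall back to client-go's defaults (about 5 QPS, burst 10),
        // which is what produces the throttling messages in the log above.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
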
	I1210 00:07:51.856319   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.052337   97943 request.go:632] Waited for 195.923221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052427   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052445   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.052456   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.052464   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.055099   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.252077   97943 request.go:632] Waited for 196.296135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252149   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252156   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.252167   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.252174   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.255050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.255574   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.255594   97943 pod_ready.go:82] duration metric: took 399.267887ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.255606   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.452073   97943 request.go:632] Waited for 196.39546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452157   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452173   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.452186   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.452244   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.458811   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:52.652632   97943 request.go:632] Waited for 193.214443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652697   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652702   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.652711   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.652716   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.655373   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.655983   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.656003   97943 pod_ready.go:82] duration metric: took 400.387415ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.656017   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.852497   97943 request.go:632] Waited for 196.400538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852597   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852602   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.852610   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.852615   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.855857   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.052833   97943 request.go:632] Waited for 196.298843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052897   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052903   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.052910   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.052914   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.055870   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.056472   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.056497   97943 pod_ready.go:82] duration metric: took 400.471759ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.056510   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.252421   97943 request.go:632] Waited for 195.828491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252528   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252541   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.252551   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.252557   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.255434   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.452445   97943 request.go:632] Waited for 196.391925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452546   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452560   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.452570   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.452575   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.456118   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.456572   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.456590   97943 pod_ready.go:82] duration metric: took 400.071362ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.456605   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.652799   97943 request.go:632] Waited for 196.033566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652870   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652877   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.652889   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.652897   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.656566   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.852630   97943 request.go:632] Waited for 195.347256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852735   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852743   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.852750   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.852754   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.856029   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.856560   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.856580   97943 pod_ready.go:82] duration metric: took 399.967291ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.856593   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.052778   97943 request.go:632] Waited for 196.074454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052856   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052864   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.052876   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.052886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.056269   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.252099   97943 request.go:632] Waited for 195.297548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252172   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.252179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.252194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.256109   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.256828   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.256845   97943 pod_ready.go:82] duration metric: took 400.243574ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.256855   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.452369   97943 request.go:632] Waited for 195.428155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452450   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452455   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.452462   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.452469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.455694   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.652684   97943 request.go:632] Waited for 196.354028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652789   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652798   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.652807   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.652815   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.655871   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.656329   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.656346   97943 pod_ready.go:82] duration metric: took 399.484539ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.656357   97943 pod_ready.go:39] duration metric: took 3.201198757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
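The readiness checks above fetch each control-plane pod and then its node, and report the pod's Ready condition before moving on. A minimal client-go sketch of reading that condition for one of the pods named in the log (the kubeconfig path is an assumed, illustrative location; minikube's own poller in pod_ready.go is more elaborate) could look like this:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name and namespace taken from the log lines above.
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "kube-scheduler-ha-070032-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("%s Ready=%s\n", pod.Name, c.Status)
            }
        }
    }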
	I1210 00:07:54.656372   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:07:54.656424   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:07:54.671199   97943 api_server.go:72] duration metric: took 20.989077821s to wait for apiserver process to appear ...
	I1210 00:07:54.671227   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:07:54.671247   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:07:54.675276   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:07:54.675337   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:07:54.675341   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.675349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.675356   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.676142   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:07:54.676268   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:07:54.676284   97943 api_server.go:131] duration metric: took 5.052294ms to wait for apiserver health ...
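The lines above show the apiserver health phase: a GET against /healthz followed by /version to read the control-plane version. A small Go sketch of an equivalent probe is below; the address is copied from the log, and skipping TLS verification is an assumption made only to keep the example self-contained, not how minikube's client is configured.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        base := "https://192.168.39.187:8443" // apiserver address from the log above

        // The test cluster uses its own CA; this sketch skips verification so it
        // runs standalone. A real client would load the cluster CA certificate.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        resp, err := client.Get(base + "/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }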
	I1210 00:07:54.676295   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:07:54.852698   97943 request.go:632] Waited for 176.309011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852754   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852758   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.852767   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.852774   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.857339   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:54.861880   97943 system_pods.go:59] 17 kube-system pods found
	I1210 00:07:54.861907   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:54.861912   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:54.861916   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:54.861920   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:54.861952   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:54.861962   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:54.861965   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:54.861969   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:54.861972   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:54.861979   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:54.861982   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:54.861985   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:54.861988   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:54.861992   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:54.861997   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:54.862000   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:54.862003   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:54.862009   97943 system_pods.go:74] duration metric: took 185.705934ms to wait for pod list to return data ...
	I1210 00:07:54.862019   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:07:55.052828   97943 request.go:632] Waited for 190.716484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052905   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052910   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.052920   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.052925   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.056476   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.056707   97943 default_sa.go:45] found service account: "default"
	I1210 00:07:55.056722   97943 default_sa.go:55] duration metric: took 194.697141ms for default service account to be created ...
	I1210 00:07:55.056734   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:07:55.252140   97943 request.go:632] Waited for 195.318975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252222   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252228   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.252235   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.252246   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.256177   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.260950   97943 system_pods.go:86] 17 kube-system pods found
	I1210 00:07:55.260986   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:55.260993   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:55.260998   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:55.261002   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:55.261005   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:55.261009   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:55.261013   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:55.261017   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:55.261021   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:55.261025   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:55.261028   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:55.261032   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:55.261035   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:55.261038   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:55.261041   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:55.261044   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:55.261047   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:55.261054   97943 system_pods.go:126] duration metric: took 204.311621ms to wait for k8s-apps to be running ...
	I1210 00:07:55.261063   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:07:55.261104   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:55.274767   97943 system_svc.go:56] duration metric: took 13.694234ms WaitForService to wait for kubelet
	I1210 00:07:55.274800   97943 kubeadm.go:582] duration metric: took 21.592682957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:07:55.274820   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:07:55.452205   97943 request.go:632] Waited for 177.292861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452266   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452271   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.452278   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.452283   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.455802   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.456649   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456674   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456687   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456691   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456696   97943 node_conditions.go:105] duration metric: took 181.87045ms to run NodePressure ...
	I1210 00:07:55.456708   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:07:55.456739   97943 start.go:255] writing updated cluster config ...
	I1210 00:07:55.458841   97943 out.go:201] 
	I1210 00:07:55.460254   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:55.460350   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.461990   97943 out.go:177] * Starting "ha-070032-m03" control-plane node in "ha-070032" cluster
	I1210 00:07:55.463162   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:07:55.463187   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:07:55.463285   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:07:55.463296   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:07:55.463384   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.463555   97943 start.go:360] acquireMachinesLock for ha-070032-m03: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:07:55.463598   97943 start.go:364] duration metric: took 23.179µs to acquireMachinesLock for "ha-070032-m03"
	I1210 00:07:55.463615   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:55.463708   97943 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1210 00:07:55.465955   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:07:55.466061   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:55.466099   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:55.482132   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1210 00:07:55.482649   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:55.483189   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:55.483214   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:55.483546   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:55.483725   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:07:55.483847   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:07:55.483970   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:07:55.484001   97943 client.go:168] LocalClient.Create starting
	I1210 00:07:55.484030   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:07:55.484063   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484076   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484129   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:07:55.484150   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484160   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484177   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:07:55.484187   97943 main.go:141] libmachine: (ha-070032-m03) Calling .PreCreateCheck
	I1210 00:07:55.484346   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:07:55.484732   97943 main.go:141] libmachine: Creating machine...
	I1210 00:07:55.484749   97943 main.go:141] libmachine: (ha-070032-m03) Calling .Create
	I1210 00:07:55.484892   97943 main.go:141] libmachine: (ha-070032-m03) Creating KVM machine...
	I1210 00:07:55.486009   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing default KVM network
	I1210 00:07:55.486135   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing private KVM network mk-ha-070032
	I1210 00:07:55.486275   97943 main.go:141] libmachine: (ha-070032-m03) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.486315   97943 main.go:141] libmachine: (ha-070032-m03) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:07:55.486369   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.486273   98753 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.486441   97943 main.go:141] libmachine: (ha-070032-m03) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:07:55.750942   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.750806   98753 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa...
	I1210 00:07:55.823142   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.822993   98753 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk...
	I1210 00:07:55.823184   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing magic tar header
	I1210 00:07:55.823200   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing SSH key tar header
	I1210 00:07:55.823214   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.823115   98753 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.823231   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03
	I1210 00:07:55.823252   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 (perms=drwx------)
	I1210 00:07:55.823278   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:07:55.823337   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:07:55.823363   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.823375   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:07:55.823392   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:07:55.823405   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:07:55.823415   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:07:55.823431   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:07:55.823442   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home
	I1210 00:07:55.823456   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Skipping /home - not owner
	I1210 00:07:55.823471   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:07:55.823488   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:07:55.823501   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:55.824547   97943 main.go:141] libmachine: (ha-070032-m03) define libvirt domain using xml: 
	I1210 00:07:55.824562   97943 main.go:141] libmachine: (ha-070032-m03) <domain type='kvm'>
	I1210 00:07:55.824568   97943 main.go:141] libmachine: (ha-070032-m03)   <name>ha-070032-m03</name>
	I1210 00:07:55.824572   97943 main.go:141] libmachine: (ha-070032-m03)   <memory unit='MiB'>2200</memory>
	I1210 00:07:55.824578   97943 main.go:141] libmachine: (ha-070032-m03)   <vcpu>2</vcpu>
	I1210 00:07:55.824582   97943 main.go:141] libmachine: (ha-070032-m03)   <features>
	I1210 00:07:55.824588   97943 main.go:141] libmachine: (ha-070032-m03)     <acpi/>
	I1210 00:07:55.824594   97943 main.go:141] libmachine: (ha-070032-m03)     <apic/>
	I1210 00:07:55.824599   97943 main.go:141] libmachine: (ha-070032-m03)     <pae/>
	I1210 00:07:55.824605   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824615   97943 main.go:141] libmachine: (ha-070032-m03)   </features>
	I1210 00:07:55.824649   97943 main.go:141] libmachine: (ha-070032-m03)   <cpu mode='host-passthrough'>
	I1210 00:07:55.824662   97943 main.go:141] libmachine: (ha-070032-m03)   
	I1210 00:07:55.824670   97943 main.go:141] libmachine: (ha-070032-m03)   </cpu>
	I1210 00:07:55.824678   97943 main.go:141] libmachine: (ha-070032-m03)   <os>
	I1210 00:07:55.824685   97943 main.go:141] libmachine: (ha-070032-m03)     <type>hvm</type>
	I1210 00:07:55.824690   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='cdrom'/>
	I1210 00:07:55.824697   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='hd'/>
	I1210 00:07:55.824703   97943 main.go:141] libmachine: (ha-070032-m03)     <bootmenu enable='no'/>
	I1210 00:07:55.824709   97943 main.go:141] libmachine: (ha-070032-m03)   </os>
	I1210 00:07:55.824714   97943 main.go:141] libmachine: (ha-070032-m03)   <devices>
	I1210 00:07:55.824720   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='cdrom'>
	I1210 00:07:55.824728   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/boot2docker.iso'/>
	I1210 00:07:55.824735   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hdc' bus='scsi'/>
	I1210 00:07:55.824740   97943 main.go:141] libmachine: (ha-070032-m03)       <readonly/>
	I1210 00:07:55.824746   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824753   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='disk'>
	I1210 00:07:55.824761   97943 main.go:141] libmachine: (ha-070032-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:07:55.824769   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk'/>
	I1210 00:07:55.824776   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hda' bus='virtio'/>
	I1210 00:07:55.824780   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824787   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824793   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='mk-ha-070032'/>
	I1210 00:07:55.824799   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824804   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824809   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824814   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='default'/>
	I1210 00:07:55.824819   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824824   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824830   97943 main.go:141] libmachine: (ha-070032-m03)     <serial type='pty'>
	I1210 00:07:55.824835   97943 main.go:141] libmachine: (ha-070032-m03)       <target port='0'/>
	I1210 00:07:55.824842   97943 main.go:141] libmachine: (ha-070032-m03)     </serial>
	I1210 00:07:55.824846   97943 main.go:141] libmachine: (ha-070032-m03)     <console type='pty'>
	I1210 00:07:55.824852   97943 main.go:141] libmachine: (ha-070032-m03)       <target type='serial' port='0'/>
	I1210 00:07:55.824859   97943 main.go:141] libmachine: (ha-070032-m03)     </console>
	I1210 00:07:55.824863   97943 main.go:141] libmachine: (ha-070032-m03)     <rng model='virtio'>
	I1210 00:07:55.824871   97943 main.go:141] libmachine: (ha-070032-m03)       <backend model='random'>/dev/random</backend>
	I1210 00:07:55.824874   97943 main.go:141] libmachine: (ha-070032-m03)     </rng>
	I1210 00:07:55.824881   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824884   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824891   97943 main.go:141] libmachine: (ha-070032-m03)   </devices>
	I1210 00:07:55.824895   97943 main.go:141] libmachine: (ha-070032-m03) </domain>
	I1210 00:07:55.824901   97943 main.go:141] libmachine: (ha-070032-m03) 
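The XML above is the libvirt domain definition generated for the new ha-070032-m03 machine. The kvm2 driver registers and boots the domain through the libvirt API directly; the sketch below is only an illustration of the same two steps using the virsh CLI, with a hypothetical path for the XML file.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Hypothetical location of a domain XML file like the one logged above.
        xmlPath := "/tmp/ha-070032-m03.xml"

        // "virsh define" registers the domain with libvirt; "virsh start" boots it.
        for _, args := range [][]string{
            {"define", xmlPath},
            {"start", "ha-070032-m03"},
        } {
            cmd := exec.Command("virsh", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("virsh", args[0], "failed:", err)
                return
            }
        }
    }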
	I1210 00:07:55.831443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:5a:d9:d9 in network default
	I1210 00:07:55.832042   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:55.832057   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring networks are active...
	I1210 00:07:55.832934   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network default is active
	I1210 00:07:55.833292   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network mk-ha-070032 is active
	I1210 00:07:55.833793   97943 main.go:141] libmachine: (ha-070032-m03) Getting domain xml...
	I1210 00:07:55.834538   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:57.048312   97943 main.go:141] libmachine: (ha-070032-m03) Waiting to get IP...
	I1210 00:07:57.049343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.049867   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.049936   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.049857   98753 retry.go:31] will retry after 285.89703ms: waiting for machine to come up
	I1210 00:07:57.337509   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.337895   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.337921   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.337875   98753 retry.go:31] will retry after 339.218188ms: waiting for machine to come up
	I1210 00:07:57.678323   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.678856   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.678881   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.678806   98753 retry.go:31] will retry after 294.170833ms: waiting for machine to come up
	I1210 00:07:57.974134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.974660   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.974681   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.974611   98753 retry.go:31] will retry after 408.745882ms: waiting for machine to come up
	I1210 00:07:58.385123   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.385636   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.385664   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.385591   98753 retry.go:31] will retry after 527.821664ms: waiting for machine to come up
	I1210 00:07:58.915568   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.916006   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.916035   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.915961   98753 retry.go:31] will retry after 925.585874ms: waiting for machine to come up
	I1210 00:07:59.843180   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:59.843652   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:59.843679   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:59.843610   98753 retry.go:31] will retry after 870.720245ms: waiting for machine to come up
	I1210 00:08:00.715984   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:00.716446   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:00.716472   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:00.716425   98753 retry.go:31] will retry after 1.331743311s: waiting for machine to come up
	I1210 00:08:02.049640   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:02.050041   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:02.050067   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:02.049985   98753 retry.go:31] will retry after 1.76199987s: waiting for machine to come up
	I1210 00:08:03.813933   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:03.814414   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:03.814439   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:03.814370   98753 retry.go:31] will retry after 1.980303699s: waiting for machine to come up
	I1210 00:08:05.796494   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:05.797056   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:05.797086   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:05.797021   98753 retry.go:31] will retry after 2.086128516s: waiting for machine to come up
	I1210 00:08:07.884316   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:07.884692   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:07.884721   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:07.884642   98753 retry.go:31] will retry after 2.780301455s: waiting for machine to come up
	I1210 00:08:10.666546   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:10.666965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:10.666996   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:10.666924   98753 retry.go:31] will retry after 4.142573793s: waiting for machine to come up
	I1210 00:08:14.811574   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:14.811965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:14.811989   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:14.811918   98753 retry.go:31] will retry after 5.321214881s: waiting for machine to come up
	I1210 00:08:20.135607   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136014   97943 main.go:141] libmachine: (ha-070032-m03) Found IP for machine: 192.168.39.244
	I1210 00:08:20.136038   97943 main.go:141] libmachine: (ha-070032-m03) Reserving static IP address...
	I1210 00:08:20.136048   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136451   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find host DHCP lease matching {name: "ha-070032-m03", mac: "52:54:00:36:e7:81", ip: "192.168.39.244"} in network mk-ha-070032
	I1210 00:08:20.209941   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Getting to WaitForSSH function...
	I1210 00:08:20.209976   97943 main.go:141] libmachine: (ha-070032-m03) Reserved static IP address: 192.168.39.244
	I1210 00:08:20.209989   97943 main.go:141] libmachine: (ha-070032-m03) Waiting for SSH to be available...
	I1210 00:08:20.212879   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213267   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.213298   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213460   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH client type: external
	I1210 00:08:20.213487   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa (-rw-------)
	I1210 00:08:20.213527   97943 main.go:141] libmachine: (ha-070032-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:08:20.213547   97943 main.go:141] libmachine: (ha-070032-m03) DBG | About to run SSH command:
	I1210 00:08:20.213584   97943 main.go:141] libmachine: (ha-070032-m03) DBG | exit 0
	I1210 00:08:20.342480   97943 main.go:141] libmachine: (ha-070032-m03) DBG | SSH cmd err, output: <nil>: 
	I1210 00:08:20.342791   97943 main.go:141] libmachine: (ha-070032-m03) KVM machine creation complete!
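The "Waiting to get IP" phase above retries the DHCP-lease lookup with a growing, jittered delay until the new machine's address appears. A generic sketch of that retry pattern in Go follows; the probe function and the exact delays are illustrative stand-ins, not the driver's retry.go implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the libvirt network's DHCP leases;
    // it is a placeholder that fails a few times before "finding" an address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.244", nil
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Add jitter and grow the delay, mirroring the increasing waits in the log.
            wait := delay + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }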
	I1210 00:08:20.343090   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:20.343678   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.343881   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.344092   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:08:20.344125   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetState
	I1210 00:08:20.345413   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:08:20.345430   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:08:20.345437   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:08:20.345450   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.347967   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348355   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.348389   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348481   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.348653   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348776   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348911   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.349041   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.349329   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.349348   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:08:20.449562   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.449588   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:08:20.449598   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.452398   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452785   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.452812   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452941   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.453110   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453240   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453428   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.453598   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.453780   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.453798   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:08:20.555272   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:08:20.555337   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:08:20.555348   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:08:20.555362   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555624   97943 buildroot.go:166] provisioning hostname "ha-070032-m03"
	I1210 00:08:20.555652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555844   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.558784   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559157   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.559192   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559357   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.559555   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559716   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559850   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.560050   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.560266   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.560285   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m03 && echo "ha-070032-m03" | sudo tee /etc/hostname
	I1210 00:08:20.676771   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m03
	
	I1210 00:08:20.676807   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.679443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.679776   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.679807   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.680006   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.680185   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680359   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680491   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.680620   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.680832   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.680847   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:08:20.791291   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.791325   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:08:20.791341   97943 buildroot.go:174] setting up certificates
	I1210 00:08:20.791358   97943 provision.go:84] configureAuth start
	I1210 00:08:20.791370   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.791652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:20.794419   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.794874   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.794902   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.795002   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.798177   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798590   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.798619   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798789   97943 provision.go:143] copyHostCerts
	I1210 00:08:20.798825   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798862   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:08:20.798871   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798934   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:08:20.799007   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799025   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:08:20.799030   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799053   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:08:20.799097   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799112   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:08:20.799119   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799140   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:08:20.799198   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m03 san=[127.0.0.1 192.168.39.244 ha-070032-m03 localhost minikube]
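The server cert above is signed by the profile CA with the SAN list shown (loopback, the node IP, and the node's hostnames). A rough openssl equivalent of that signing step, purely as a sketch (minikube generates these certs in Go; the file names, key size and 365-day validity here are placeholder assumptions):
	# sketch: issue a server cert with the same SANs as the line above
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.ha-070032-m03" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.244,DNS:ha-070032-m03,DNS:localhost,DNS:minikube')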
	I1210 00:08:20.901770   97943 provision.go:177] copyRemoteCerts
	I1210 00:08:20.901829   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:08:20.901857   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.904479   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904810   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.904842   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904999   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.905202   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.905341   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.905465   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:20.987981   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:08:20.988061   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:08:21.011122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:08:21.011186   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:08:21.033692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:08:21.033754   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:08:21.056597   97943 provision.go:87] duration metric: took 265.223032ms to configureAuth
	I1210 00:08:21.056629   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:08:21.057591   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:21.057673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.060831   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.061378   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.061904   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062107   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062269   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.062474   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.062700   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.062721   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:08:21.281273   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
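The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o so the runtime treats the service CIDR as an insecure registry range. A minimal check one could run on the guest afterwards (sketch, using the paths from this run):
	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio           # expect: active after the restart above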
	I1210 00:08:21.281301   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:08:21.281310   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetURL
	I1210 00:08:21.282833   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using libvirt version 6000000
	I1210 00:08:21.285219   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285581   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.285613   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285747   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:08:21.285761   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:08:21.285769   97943 client.go:171] duration metric: took 25.801757929s to LocalClient.Create
	I1210 00:08:21.285791   97943 start.go:167] duration metric: took 25.801831678s to libmachine.API.Create "ha-070032"
	I1210 00:08:21.285798   97943 start.go:293] postStartSetup for "ha-070032-m03" (driver="kvm2")
	I1210 00:08:21.285807   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:08:21.285828   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.286085   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:08:21.286117   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.288055   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288329   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.288370   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288480   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.288647   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.288777   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.288901   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.369391   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:08:21.373285   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:08:21.373310   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:08:21.373392   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:08:21.373503   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:08:21.373518   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:08:21.373639   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:08:21.382298   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:21.403806   97943 start.go:296] duration metric: took 117.996202ms for postStartSetup
	I1210 00:08:21.403863   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:21.404476   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.407162   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407495   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.407517   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407796   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:08:21.408029   97943 start.go:128] duration metric: took 25.944309943s to createHost
	I1210 00:08:21.408053   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.410158   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410458   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.410486   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410661   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.410839   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411023   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411142   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.411301   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.411462   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.411473   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:08:21.514926   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789301.493981402
	
	I1210 00:08:21.514949   97943 fix.go:216] guest clock: 1733789301.493981402
	I1210 00:08:21.514956   97943 fix.go:229] Guest: 2024-12-10 00:08:21.493981402 +0000 UTC Remote: 2024-12-10 00:08:21.408042688 +0000 UTC m=+148.654123328 (delta=85.938714ms)
	I1210 00:08:21.514972   97943 fix.go:200] guest clock delta is within tolerance: 85.938714ms
	I1210 00:08:21.514978   97943 start.go:83] releasing machines lock for "ha-070032-m03", held for 26.05137115s
	I1210 00:08:21.514997   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.515241   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.517912   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.518241   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.518261   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.520470   97943 out.go:177] * Found network options:
	I1210 00:08:21.521800   97943 out.go:177]   - NO_PROXY=192.168.39.187,192.168.39.198
	W1210 00:08:21.523143   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.523168   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.523188   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523682   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523924   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.524029   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:08:21.524084   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	W1210 00:08:21.524110   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.524137   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.524228   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:08:21.524251   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.527134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527403   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527435   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527461   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527644   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.527864   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527884   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.527885   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.528014   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.528094   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528182   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.528256   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.528295   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528396   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.759543   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:08:21.765842   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:08:21.765945   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:08:21.781497   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:08:21.781528   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:08:21.781601   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:08:21.798260   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:08:21.812631   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:08:21.812703   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:08:21.826291   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:08:21.839819   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:08:21.970011   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:08:22.106825   97943 docker.go:233] disabling docker service ...
	I1210 00:08:22.106898   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:08:22.120845   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:08:22.133078   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:08:22.277754   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:08:22.396135   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:08:22.410691   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:08:22.428016   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:08:22.428081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.437432   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:08:22.437492   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.446807   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.457081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.466785   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:08:22.476232   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.485876   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.501168   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
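Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a handful of fields. Roughly this shape (illustrative sketch of the expected values, not a capture from this run):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]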
	I1210 00:08:22.511414   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:08:22.520354   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:08:22.520415   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:08:22.532412   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
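The status-255 sysctl probe above failed only because br_netfilter was not loaded yet (the /proc key does not exist until it is); the modprobe and the echo then put the kernel into the state bridged pod networking expects. Verifying by hand would look like (sketch):
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # 1, as written by the echo above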
	I1210 00:08:22.541467   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:22.650142   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:08:22.739814   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:08:22.739908   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:08:22.744756   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:08:22.744820   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:08:22.748420   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:08:22.786505   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:08:22.786627   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.812591   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.840186   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:08:22.841668   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:08:22.842917   97943 out.go:177]   - env NO_PROXY=192.168.39.187,192.168.39.198
	I1210 00:08:22.843965   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:22.846623   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847074   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:22.847104   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847299   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:08:22.851246   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:22.863976   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:08:22.864213   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:22.864497   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.864537   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.879688   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1210 00:08:22.880163   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.880674   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.880695   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.880999   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.881201   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:08:22.882501   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:22.882829   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.882872   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.897175   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1210 00:08:22.897634   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.898146   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.898164   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.898482   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.898668   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:22.898817   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.244
	I1210 00:08:22.898832   97943 certs.go:194] generating shared ca certs ...
	I1210 00:08:22.898852   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:22.899000   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:08:22.899051   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:08:22.899064   97943 certs.go:256] generating profile certs ...
	I1210 00:08:22.899170   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:08:22.899201   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8
	I1210 00:08:22.899223   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:08:23.092450   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 ...
	I1210 00:08:23.092478   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8: {Name:mk366065b18659314ca3f0bba1448963daaf0a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092639   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 ...
	I1210 00:08:23.092651   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8: {Name:mk5fa66078dcf45a83918146be6cef89d508f259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092720   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:08:23.092839   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:08:23.092959   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:08:23.092977   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:08:23.092992   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:08:23.093006   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:08:23.093017   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:08:23.093029   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:08:23.093041   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:08:23.093053   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:08:23.106669   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:08:23.106767   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:08:23.106812   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:08:23.106826   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:08:23.106858   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:08:23.106887   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:08:23.106916   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:08:23.107014   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:23.107059   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.107078   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.107095   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.107140   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:23.110428   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.110865   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:23.110897   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.111098   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:23.111299   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:23.111497   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:23.111654   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:23.182834   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:08:23.187460   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:08:23.201682   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:08:23.206212   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:08:23.216977   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:08:23.221040   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:08:23.231771   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:08:23.235936   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:08:23.245237   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:08:23.249225   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:08:23.259163   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:08:23.262970   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:08:23.272905   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:08:23.296036   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:08:23.319479   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:08:23.343697   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:08:23.365055   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1210 00:08:23.386745   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:08:23.408376   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:08:23.431761   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:08:23.453442   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:08:23.474461   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:08:23.496103   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:08:23.518047   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:08:23.533023   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:08:23.547698   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:08:23.563066   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:08:23.577579   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:08:23.592182   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:08:23.608125   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:08:23.627416   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:08:23.632821   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:08:23.642458   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646845   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646909   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.652298   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:08:23.662442   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:08:23.672292   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676158   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676205   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.681586   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:08:23.691472   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:08:23.701487   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705375   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705413   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.710443   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:08:23.720294   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:08:23.723799   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:08:23.723848   97943 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.2 crio true true} ...
	I1210 00:08:23.723926   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
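The fragment above becomes the kubelet systemd drop-in (the 10-kubeadm.conf written a few steps below) that pins --hostname-override and --node-ip for this machine. Two quick ways to confirm it took effect on the guest (sketch):
	sudo systemctl cat kubelet                          # unit plus the 10-kubeadm.conf drop-in carrying the ExecStart above
	pgrep -a kubelet | grep -o -- '--node-ip=[^ ]*'     # expect --node-ip=192.168.39.244 on this node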
	I1210 00:08:23.723949   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:08:23.723977   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:08:23.738685   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:08:23.738750   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
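The manifest above is placed as a static pod on each control-plane node; the instance that wins the plndr-cp-lock lease answers ARP for 192.168.39.254 and, with lb_enable set, balances port 8443 across the API servers. A quick probe against the VIP (sketch; /version is normally served without credentials):
	curl -sk https://192.168.39.254:8443/version   # apiserver version JSON once a leader holds the VIP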
	I1210 00:08:23.738796   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.747698   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:08:23.747755   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1210 00:08:23.756827   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:08:23.756846   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:23.756856   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1210 00:08:23.756914   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.756945   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756968   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.773755   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773816   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:08:23.773823   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:08:23.773877   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:08:23.793177   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:08:23.793213   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
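The scp calls above push the cached kubeadm/kubectl/kubelet binaries into the guest; when the local cache is cold, the dl.k8s.io URLs logged earlier are fetched and verified against their .sha256 companions. A hand-rolled equivalent of that download-and-verify pattern (sketch, version and arch taken from this run):
	v=v1.31.2; a=amd64
	curl -LO "https://dl.k8s.io/release/${v}/bin/linux/${a}/kubeadm"
	echo "$(curl -sL https://dl.k8s.io/release/${v}/bin/linux/${a}/kubeadm.sha256)  kubeadm" | sha256sum --check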
	I1210 00:08:24.557518   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:08:24.566776   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:08:24.582142   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:08:24.597144   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:08:24.611549   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:08:24.615055   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:24.625780   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:24.763770   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:24.783613   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:24.784058   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:24.784117   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:24.799970   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I1210 00:08:24.800574   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:24.801077   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:24.801104   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:24.801443   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:24.801614   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:24.801763   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:08:24.801913   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:08:24.801952   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:24.804893   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805288   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:24.805318   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805470   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:24.805660   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:24.805792   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:24.805938   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:24.954369   97943 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:24.954415   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I1210 00:08:45.926879   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (20.972431626s)
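Once the roughly 21-second join above returns, ha-070032-m03 should be registered as a third control-plane node with its own etcd member. Checks one might run from the primary (sketch, reusing the binary and kubeconfig paths seen in this run):
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes ha-070032-m03
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l component=etcd -o wide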
	I1210 00:08:45.926930   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:08:46.537890   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m03 minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:08:46.678755   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:08:46.787657   97943 start.go:319] duration metric: took 21.985888121s to joinCluster
	I1210 00:08:46.787759   97943 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:46.788166   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:46.789343   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:08:46.790511   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:47.024805   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:47.076330   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:08:47.076598   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:08:47.076672   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:08:47.076938   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m03" to be "Ready" ...
	I1210 00:08:47.077046   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.077058   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.077068   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.077072   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.081152   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:47.577919   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.577942   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.577950   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.577954   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.581367   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.077920   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.077946   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.077954   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.077957   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.081478   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.578106   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.578131   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.578140   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.578145   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.581394   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.077995   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.078020   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.078028   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.078032   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.081191   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.081654   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:49.577520   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.577543   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.577568   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.577572   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.580973   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:50.077456   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.077483   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.077492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.077497   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.083402   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:08:50.577976   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.577999   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.578007   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.578010   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.580506   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:08:51.077330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.077376   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.077386   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.077395   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.080649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.577290   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.577326   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.577339   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.577349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.580882   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.581750   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:52.077653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.077675   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.077683   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.077687   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.080889   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:52.578159   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.578187   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.578198   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.578206   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.582757   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:53.078153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.078177   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.078185   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.078189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.081439   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:53.577299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.577324   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.577333   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.577338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.580510   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:54.077196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.077220   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.077230   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.077236   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.083654   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:08:54.084273   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:54.578076   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.578111   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.578119   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.578123   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.581723   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.077626   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.077648   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.077657   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.077660   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.081300   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.577841   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.577867   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.577886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.581081   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.078005   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.078027   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.078036   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.078039   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.081200   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.577743   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.577839   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.577862   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.582190   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:56.583066   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:57.077440   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.077464   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.077472   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.077477   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.080605   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:57.577457   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.577484   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.577493   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.577503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.580830   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.077293   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.077331   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.077344   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.077352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.080511   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.577256   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.577282   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.577294   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.577299   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.580528   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.077895   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.077918   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.077926   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.077932   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.080996   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.081515   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:59.577418   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.577442   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.577450   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.577454   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.580861   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.077126   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.077149   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.077160   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.077166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.080369   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.577334   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.577369   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.577376   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.580424   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.077338   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.077364   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.077371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.077375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.080475   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.577333   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.577371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.577378   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.581002   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.581675   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:02.078158   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.078188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.078197   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.078202   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.081520   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:02.577513   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.577534   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.577542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.577548   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.580750   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:03.077225   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.077249   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.077258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.077262   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.080188   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:03.577192   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.577225   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.577233   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.577238   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.579962   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:04.078167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.078198   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.078207   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.078211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.081272   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:04.081781   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:04.577794   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.577818   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.577826   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.577833   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.580810   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.077153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.077175   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.077183   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.077189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.080235   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.577566   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.577589   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.577597   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.577601   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.580616   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.581339   97943 node_ready.go:49] node "ha-070032-m03" has status "Ready":"True"
	I1210 00:09:05.581357   97943 node_ready.go:38] duration metric: took 18.504395192s for node "ha-070032-m03" to be "Ready" ...
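node_ready.go above polls the Node object about every 500ms until its Ready condition flips to True, which took roughly 18.5s for ha-070032-m03. A minimal client-go sketch of that wait (an illustration, not minikube's actual node_ready.go; the package and function names are mine):

    package k8swait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the Node until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence visible in the log timestamps
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }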
	I1210 00:09:05.581372   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:09:05.581447   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:05.581458   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.581465   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.581469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.589597   97943 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1210 00:09:05.596462   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.596536   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:09:05.596544   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.596551   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.596556   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599226   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.599844   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.599860   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.599867   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599871   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.602025   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.602633   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.602657   97943 pod_ready.go:82] duration metric: took 6.171823ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602669   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602734   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:09:05.602745   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.602755   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.602759   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.605440   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.606129   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.606147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.606157   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.606166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.608461   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.608910   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.608928   97943 pod_ready.go:82] duration metric: took 6.250217ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608941   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608999   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:09:05.609009   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.609019   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.609029   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.611004   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.611561   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.611577   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.611587   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.611591   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.613769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.614248   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.614265   97943 pod_ready.go:82] duration metric: took 5.312355ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614275   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:09:05.614341   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.614352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.614362   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.616534   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.617151   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:05.617169   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.617188   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.617196   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.619058   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.619439   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.619455   97943 pod_ready.go:82] duration metric: took 5.173011ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.619463   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.777761   97943 request.go:632] Waited for 158.225465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777859   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777871   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.777881   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.777892   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.780968   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.978102   97943 request.go:632] Waited for 196.392006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978169   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978176   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.978187   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.978209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.981545   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.981978   97943 pod_ready.go:93] pod "etcd-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.981997   97943 pod_ready.go:82] duration metric: took 362.528097ms for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.982014   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.178303   97943 request.go:632] Waited for 196.186487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178366   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178371   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.178384   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.178391   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.181153   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:06.378297   97943 request.go:632] Waited for 196.356871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378357   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378363   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.378371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.378375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.381593   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.382165   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.382184   97943 pod_ready.go:82] duration metric: took 400.160632ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.382194   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.578291   97943 request.go:632] Waited for 195.993966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578353   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.578366   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.578370   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.582418   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:06.777593   97943 request.go:632] Waited for 194.199077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777669   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777674   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.777681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.777686   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.780997   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.781681   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.781703   97943 pod_ready.go:82] duration metric: took 399.498231ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.781713   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.977670   97943 request.go:632] Waited for 195.882184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977738   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977758   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.977770   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.977778   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.981052   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.178250   97943 request.go:632] Waited for 196.370885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178313   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178319   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.178329   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.178338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.182730   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:07.183284   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.183306   97943 pod_ready.go:82] duration metric: took 401.586259ms for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.183318   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.378237   97943 request.go:632] Waited for 194.824127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378316   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378322   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.378330   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.378333   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.382039   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.578085   97943 request.go:632] Waited for 195.402263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578148   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578154   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.578162   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.578166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.581490   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.582147   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.582169   97943 pod_ready.go:82] duration metric: took 398.840074ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.582184   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.778287   97943 request.go:632] Waited for 195.989005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778362   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778374   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.778386   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.778396   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.781669   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.978394   97943 request.go:632] Waited for 195.912192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978479   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978484   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.978492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.978496   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.981759   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.982200   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.982218   97943 pod_ready.go:82] duration metric: took 400.02698ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.982230   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.178354   97943 request.go:632] Waited for 196.04264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178439   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178449   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.178466   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.181631   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.378597   97943 request.go:632] Waited for 196.366344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378673   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378683   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.378697   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.378707   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.384450   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:09:08.385049   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.385078   97943 pod_ready.go:82] duration metric: took 402.840862ms for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.385096   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.577999   97943 request.go:632] Waited for 192.799851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578083   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578091   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.578100   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.578112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.581292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.777999   97943 request.go:632] Waited for 196.009017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778080   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778085   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.778093   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.778098   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.781007   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:08.781565   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.781586   97943 pod_ready.go:82] duration metric: took 396.482834ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.781597   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.978485   97943 request.go:632] Waited for 196.79193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978550   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978555   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.978577   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.978584   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.981555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.178372   97943 request.go:632] Waited for 196.176512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178445   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178450   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.178462   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.180718   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.181230   97943 pod_ready.go:93] pod "kube-proxy-bhnsm" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.181253   97943 pod_ready.go:82] duration metric: took 399.648229ms for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.181267   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.378388   97943 request.go:632] Waited for 197.025674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378477   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378488   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.378497   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.378503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.381425   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.578360   97943 request.go:632] Waited for 196.219183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578421   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578427   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.578435   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.578443   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.581280   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.581905   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.581924   97943 pod_ready.go:82] duration metric: took 400.650321ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.581937   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.778061   97943 request.go:632] Waited for 196.052401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778128   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.778155   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.778159   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.781448   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.978364   97943 request.go:632] Waited for 196.322768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978428   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978432   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.978441   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.978451   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.981730   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.982286   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.982308   97943 pod_ready.go:82] duration metric: took 400.362948ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.982322   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.178076   97943 request.go:632] Waited for 195.65251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178177   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.178190   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.178199   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.180876   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.377670   97943 request.go:632] Waited for 196.175118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377736   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377741   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.377751   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.377756   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.380801   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.381686   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.381707   97943 pod_ready.go:82] duration metric: took 399.375185ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.381723   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.578151   97943 request.go:632] Waited for 196.332176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578230   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578239   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.578251   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.578259   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.581336   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.778384   97943 request.go:632] Waited for 196.388806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778498   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778512   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.778524   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.778534   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.781555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.782190   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.782213   97943 pod_ready.go:82] duration metric: took 400.482867ms for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.782226   97943 pod_ready.go:39] duration metric: took 5.200841149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
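Each per-pod wait above pairs a GET on the Pod with a GET on its Node, and counts the pod as ready once its PodReady condition reports True. A small client-go sketch of that condition check (illustrative; the namespace handling and helper name are mine, not minikube's pod_ready.go):

    package k8swait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady fetches a pod from kube-system and reports whether its Ready condition is True.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }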
	I1210 00:09:10.782243   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:09:10.782306   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:09:10.798221   97943 api_server.go:72] duration metric: took 24.010410964s to wait for apiserver process to appear ...
	I1210 00:09:10.798252   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:09:10.798277   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:09:10.802683   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:09:10.802763   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:09:10.802775   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.802786   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.802791   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.803637   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:09:10.803715   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:09:10.803733   97943 api_server.go:131] duration metric: took 5.473282ms to wait for apiserver health ...
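Once the kube-apiserver process is visible to pgrep, the health wait issues a plain GET to /healthz (which returned 200 "ok" here) and then reads /version, reporting v1.31.2. With client-go the same two probes can go through the discovery REST client; a sketch, assuming cs is a clientset built from the same kubeconfig:

    package k8swait

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer performs the /healthz probe and reads the server version, mirroring the log above.
    func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return fmt.Errorf("healthz: %w", err)
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", body)
        }
        ver, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", ver.GitVersion) // v1.31.2 in this run
        return nil
    }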
	I1210 00:09:10.803747   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:09:10.978074   97943 request.go:632] Waited for 174.240033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978174   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.978200   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.978210   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.984458   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:09:10.990989   97943 system_pods.go:59] 24 kube-system pods found
	I1210 00:09:10.991013   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:10.991018   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:10.991022   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:10.991026   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:10.991029   97943 system_pods.go:61] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:10.991032   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:10.991034   97943 system_pods.go:61] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:10.991037   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:10.991041   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:10.991044   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:10.991047   97943 system_pods.go:61] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:10.991050   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:10.991054   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:10.991057   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:10.991060   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:10.991064   97943 system_pods.go:61] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:10.991068   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:10.991074   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:10.991078   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:10.991081   97943 system_pods.go:61] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:10.991084   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:10.991087   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:10.991090   97943 system_pods.go:61] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:10.991095   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:10.991101   97943 system_pods.go:74] duration metric: took 187.346055ms to wait for pod list to return data ...
	I1210 00:09:10.991110   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:09:11.178582   97943 request.go:632] Waited for 187.368121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178661   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178670   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.178681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.178692   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.181792   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.181919   97943 default_sa.go:45] found service account: "default"
	I1210 00:09:11.181932   97943 default_sa.go:55] duration metric: took 190.816109ms for default service account to be created ...
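The "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter, not from apiserver-side priority and fairness. As a hedged illustration only (this is not minikube's own code, and the kubeconfig path and values are assumptions), raising QPS and Burst on a rest.Config is how a client would reduce those waits:

    // throttle_config.go - illustrative sketch: where client-go's client-side
    // rate limiter (the source of the "Waited for ... due to client-side
    // throttling" messages above) is configured. Path and values are assumptions.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig at a typical location; adjust as needed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}

    	// client-go defaults to QPS=5, Burst=10; requests beyond that budget are
    	// delayed client-side, which is what produces the throttling log lines.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }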
	I1210 00:09:11.181940   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:09:11.378264   97943 request.go:632] Waited for 196.227358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378336   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378344   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.378355   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.378365   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.383056   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:11.390160   97943 system_pods.go:86] 24 kube-system pods found
	I1210 00:09:11.390190   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:11.390197   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:11.390201   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:11.390207   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:11.390211   97943 system_pods.go:89] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:11.390215   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:11.390219   97943 system_pods.go:89] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:11.390223   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:11.390227   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:11.390231   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:11.390238   97943 system_pods.go:89] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:11.390243   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:11.390247   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:11.390251   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:11.390256   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:11.390259   97943 system_pods.go:89] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:11.390263   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:11.390266   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:11.390273   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:11.390276   97943 system_pods.go:89] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:11.390280   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:11.390284   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:11.390287   97943 system_pods.go:89] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:11.390290   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:11.390298   97943 system_pods.go:126] duration metric: took 208.352897ms to wait for k8s-apps to be running ...
	I1210 00:09:11.390309   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:09:11.390362   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:09:11.405439   97943 system_svc.go:56] duration metric: took 15.123283ms WaitForService to wait for kubelet
	I1210 00:09:11.405468   97943 kubeadm.go:582] duration metric: took 24.617672778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:09:11.405491   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:09:11.577957   97943 request.go:632] Waited for 172.358102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578045   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578061   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.578081   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.578091   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.582050   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.583133   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583157   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583185   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583189   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583193   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583196   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583201   97943 node_conditions.go:105] duration metric: took 177.705427ms to run NodePressure ...
	I1210 00:09:11.583218   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:09:11.583239   97943 start.go:255] writing updated cluster config ...
	I1210 00:09:11.583593   97943 ssh_runner.go:195] Run: rm -f paused
	I1210 00:09:11.635827   97943 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:09:11.638609   97943 out.go:177] * Done! kubectl is now configured to use "ha-070032" cluster and "default" namespace by default
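The startup log above shows minikube repeatedly polling the apiserver's /healthz endpoint and then listing kube-system pods before declaring the cluster ready. Below is a minimal, illustrative sketch of a comparable health poll; it is not the minikube implementation, and the endpoint address (taken from the log) and the skipped TLS verification are assumptions made only to keep the sketch short.

    // healthz_poll.go - illustrative sketch of polling a Kubernetes apiserver
    // /healthz endpoint until it reports "ok", similar to the wait loop logged
    // above. A real client would present the cluster CA and client certificates
    // instead of skipping verification.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: skip TLS verification for brevity only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // apiserver reported healthy
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	// Address taken from the log above; adjust for your cluster.
    	if err := waitForHealthz("https://192.168.39.187:8443/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz: ok")
    }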
	
	
	==> CRI-O <==
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.484913748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574484895333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2990f879-b3ee-4be0-82d1-d637f63eb5ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.485442644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1b7147d-d9dc-4ddd-95e5-d448b281a950 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.485509936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1b7147d-d9dc-4ddd-95e5-d448b281a950 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.487276830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1b7147d-d9dc-4ddd-95e5-d448b281a950 name=/runtime.v1.RuntimeService/ListContainers
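Each CRI-O entry above is a debug trace of a CRI gRPC call (Version, ImageFsInfo, ListContainers) served over the crio socket, most likely issued by the kubelet's periodic sync; a ListContainers request with an empty filter returns the full container list, which is why the large response payload recurs. The following is a rough sketch of issuing the same calls directly from a Go client; the socket path and the use of k8s.io/cri-api are assumptions based on standard CRI tooling (crictl performs equivalent calls from the command line), not something taken from this report.

    // cri_list.go - illustrative sketch of the CRI calls traced in the CRI-O
    // debug log above (Version, ImageFsInfo, ListContainers), issued directly
    // over the CRI-O unix socket. Socket path and package versions are assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// Assumption: default CRI-O socket location.
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	img := runtimeapi.NewImageServiceClient(conn)

    	// Version: matches the VersionRequest/VersionResponse pairs in the log.
    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

    	// ImageFsInfo: reports image filesystem usage, as in the log.
    	if fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{}); err == nil {
    		for _, f := range fs.ImageFilesystems {
    			fmt.Printf("image fs %s: %d bytes\n", f.FsId.Mountpoint, f.UsedBytes.Value)
    		}
    	}

    	// ListContainers with an empty filter returns the full container list,
    	// i.e. the large response repeated in the log entries above.
    	cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{},
    	})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range cs.Containers {
    		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
    	}
    }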
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.526299793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b8c8014-93eb-44c2-9f4e-87a5b858ff39 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.526392817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b8c8014-93eb-44c2-9f4e-87a5b858ff39 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.527504655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d4443d1-003d-407b-9ac0-c1143f528b7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.527953561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574527934440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d4443d1-003d-407b-9ac0-c1143f528b7c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.528443947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54294ccd-9637-4a3e-82f1-0b320c47bfa0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.528513235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54294ccd-9637-4a3e-82f1-0b320c47bfa0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.528864630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54294ccd-9637-4a3e-82f1-0b320c47bfa0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.564188364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=364cb009-113c-49a1-8edc-b03255542938 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.564266128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=364cb009-113c-49a1-8edc-b03255542938 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.565340168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5aa02ee-39b6-42b5-a221-93cb4b106da8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.565833838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574565812490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5aa02ee-39b6-42b5-a221-93cb4b106da8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.566368662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a53e49d-ae94-409e-ad87-75c11a756d9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.566436529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a53e49d-ae94-409e-ad87-75c11a756d9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.566791570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a53e49d-ae94-409e-ad87-75c11a756d9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.602655337Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fea08e6-7e45-4bc4-a3aa-ac00f5ecebef name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.602781006Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fea08e6-7e45-4bc4-a3aa-ac00f5ecebef name=/runtime.v1.RuntimeService/Version
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.604170276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21c55491-2ebf-4a16-a1d7-449ecf82cc8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.604895026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574604873648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c55491-2ebf-4a16-a1d7-449ecf82cc8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.605622089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4268961-fc7c-428b-9ebc-4efac6d7c375 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.605670687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4268961-fc7c-428b-9ebc-4efac6d7c375 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:12:54 ha-070032 crio[662]: time="2024-12-10 00:12:54.605974455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4268961-fc7c-428b-9ebc-4efac6d7c375 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c6ab8dccd8ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e3f274c30a395       busybox-7dff88458-d682h
	e305236942a6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   5a85b4a79da52       coredns-7c65d6cfc9-nqnhw
	7c2e334f3ec55       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   f558795052a9d       coredns-7c65d6cfc9-fs6l6
	a0bc6f0cc193d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   3ad98b3ae6d22       storage-provisioner
	4c87cad753cfc       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   07cf68f38d235       kindnet-r97q9
	d7ce0ccc8b228       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f6e164f7d5dc2       kube-proxy-xsxdp
	2c832ea7354c3       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   63415c4eed5c6       kube-vip-ha-070032
	a1ad93591d94d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   974a006af9e0d       kube-apiserver-ha-070032
	1482c9caeda45       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   2ae901f42d388       kube-scheduler-ha-070032
	3cc792ca2c209       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   94eb5ad94038f       etcd-ha-070032
	d06c286b00c11       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   baf6b5fc008a9       kube-controller-manager-ha-070032
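A listing like the container status table above is normally collected on the node itself with crictl. A minimal sketch, assuming shell access to the VM via "minikube -p ha-070032 ssh" (the profile name ha-070032 is inferred from the node names in this report, not stated explicitly here):

    sudo crictl ps -a                 # all containers, including exited ones
    sudo crictl logs <CONTAINER_ID>   # logs for a single container

<CONTAINER_ID> is a placeholder for one of the IDs in the first column of the table above.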
	
	
	==> coredns [7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea] <==
	[INFO] 10.244.3.2:46682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001449431s
	[INFO] 10.244.1.2:58178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186321s
	[INFO] 10.244.1.2:50380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193258s
	[INFO] 10.244.1.2:46652 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001618s
	[INFO] 10.244.1.2:57883 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426883s
	[INFO] 10.244.0.4:59352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009624s
	[INFO] 10.244.0.4:54543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069497s
	[INFO] 10.244.0.4:53696 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011622s
	[INFO] 10.244.0.4:55436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112389s
	[INFO] 10.244.3.2:43114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706864s
	[INFO] 10.244.3.2:56624 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088751s
	[INFO] 10.244.3.2:44513 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074851s
	[INFO] 10.244.3.2:49956 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081755s
	[INFO] 10.244.1.2:40349 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153721s
	[INFO] 10.244.0.4:44925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128981s
	[INFO] 10.244.0.4:36252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088006s
	[INFO] 10.244.0.4:39383 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070489s
	[INFO] 10.244.0.4:51627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125996s
	[INFO] 10.244.3.2:46896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118479s
	[INFO] 10.244.1.2:38261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013128s
	[INFO] 10.244.1.2:58062 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196774s
	[INFO] 10.244.0.4:47202 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140777s
	[INFO] 10.244.0.4:55399 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091936s
	[INFO] 10.244.3.2:58172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126998s
	[INFO] 10.244.3.2:58403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107335s
	
	
	==> coredns [e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8] <==
	[INFO] 10.244.3.2:39118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.049213372s
	[INFO] 10.244.1.2:47189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002650171s
	[INFO] 10.244.1.2:60873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149978s
	[INFO] 10.244.1.2:48109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137629s
	[INFO] 10.244.1.2:49474 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113792s
	[INFO] 10.244.0.4:41643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681013s
	[INFO] 10.244.0.4:48048 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011923s
	[INFO] 10.244.0.4:35726 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000999387s
	[INFO] 10.244.0.4:41981 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003888s
	[INFO] 10.244.3.2:42883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156584s
	[INFO] 10.244.3.2:47597 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174459s
	[INFO] 10.244.3.2:52426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001324612s
	[INFO] 10.244.3.2:51253 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071403s
	[INFO] 10.244.1.2:50492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118518s
	[INFO] 10.244.1.2:49203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108258s
	[INFO] 10.244.1.2:51348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096375s
	[INFO] 10.244.3.2:42362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236533s
	[INFO] 10.244.3.2:60373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010669s
	[INFO] 10.244.3.2:54648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107013s
	[INFO] 10.244.1.2:49645 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168571s
	[INFO] 10.244.1.2:37889 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146602s
	[INFO] 10.244.0.4:44430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098202s
	[INFO] 10.244.0.4:40310 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093003s
	[INFO] 10.244.3.2:55334 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110256s
	[INFO] 10.244.3.2:41666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108876s
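The coredns queries above (kubernetes.default, host.minikube.internal, and the reverse lookups of the cluster service IPs) come from the test's busybox pods on the 10.244.x.0/24 pod networks. In-cluster resolution can be re-checked from one of those pods; a sketch, assuming the kubectl context is named after the profile (ha-070032) and reusing the busybox pod shown in the container status section:

    kubectl --context ha-070032 exec busybox-7dff88458-d682h -- nslookup kubernetes.default
    kubectl --context ha-070032 exec busybox-7dff88458-d682h -- nslookup host.minikube.internal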
	
	
	==> describe nodes <==
	Name:               ha-070032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-070032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb099128ff44c2a9726305ea6a63c95
	  System UUID:                8fb09912-8ff4-4c2a-9726-305ea6a63c95
	  Boot ID:                    72ec90c5-f76d-4c2b-9a52-435cb90236ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d682h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7c65d6cfc9-fs6l6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-nqnhw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-070032                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-r97q9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-070032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-070032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-xsxdp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-070032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-070032                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node ha-070032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node ha-070032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node ha-070032 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-070032 status is now: NodeReady
	  Normal  RegisteredNode           5m16s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  RegisteredNode           4m2s   node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	
	
	Name:               ha-070032-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:07:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:10:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-070032-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c2b302d819044f8ad0494a0ee312d67
	  System UUID:                2c2b302d-8190-44f8-ad04-94a0ee312d67
	  Boot ID:                    b80c4e1c-4168-43bd-ac70-470e7e9703f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7gbz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-070032-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-69btk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-070032-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-070032-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-proxy-7fm88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-070032-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-vip-ha-070032-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m24s                  cidrAllocator    Node ha-070032-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-070032-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-070032-m02 status is now: NodeNotReady
	
	
	Name:               ha-070032-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-070032-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7af7f783967c41bab4027928f3eb1ce2
	  System UUID:                7af7f783-967c-41ba-b402-7928f3eb1ce2
	  Boot ID:                    d7bca268-a1b9-47e2-900d-e8e3d560bcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pw24w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-070032-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m9s
	  kube-system                 kindnet-gbrrg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m11s
	  kube-system                 kube-apiserver-ha-070032-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-070032-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-bhnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-scheduler-ha-070032-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-070032-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     4m11s                  cidrAllocator    Node ha-070032-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m11s (x8 over 4m11s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s (x8 over 4m11s)  kubelet          Node ha-070032-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s (x7 over 4m11s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	
	
	Name:               ha-070032-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_09_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-070032-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1722ee99e8fc4ae7bbf0809a3824e471
	  System UUID:                1722ee99-e8fc-4ae7-bbf0-809a3824e471
	  Boot ID:                    4df30219-5a9e-41b4-adfb-6890ccd87aac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-knnxw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-k8xs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m5s                 cidrAllocator    Node ha-070032-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m5s (x2 over 3m5s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x2 over 3m5s)  kubelet          Node ha-070032-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x2 over 3m5s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  NodeReady                2m45s                kubelet          Node ha-070032-m04 status is now: NodeReady
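In the node descriptions above, ha-070032, ha-070032-m03 and ha-070032-m04 report Ready, while ha-070032-m02 carries node.kubernetes.io/unreachable taints and its conditions read "Kubelet stopped posting node status", which is consistent with the secondary control-plane node having been stopped by the test. The same view can be reproduced with the following, assuming the kubectl context matches the profile name:

    kubectl --context ha-070032 get nodes -o wide
    kubectl --context ha-070032 describe node ha-070032-m02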
	
	
	==> dmesg <==
	[Dec10 00:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037715] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 00:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611346] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.711169] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.053296] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050206] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.175256] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.129791] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.262857] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.716566] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.745437] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.033385] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.073983] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.636013] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.381804] kauditd_printk_skb: 38 callbacks suppressed
	[Dec10 00:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06] <==
	{"level":"warn","ts":"2024-12-10T00:12:54.713268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.813418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.817375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.840418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.847979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.851533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.859572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.865539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.871596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.874371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.876951Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.881852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.887795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.893734Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.896684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.899088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.907625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.913727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.913852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.919650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.922450Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.924944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.927958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.933817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:12:54.939678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:12:55 up 6 min,  0 users,  load average: 0.23, 0.29, 0.15
	Linux ha-070032 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3] <==
	I1210 00:12:24.367477       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.364895       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:34.364970       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.365169       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:34.365177       1 main.go:301] handling current node
	I1210 00:12:34.365200       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:34.365204       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:34.365319       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:34.365324       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361278       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:44.361407       1 main.go:301] handling current node
	I1210 00:12:44.361435       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:44.361453       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:44.361686       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:44.361767       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361952       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:44.361977       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:54.368862       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:54.368987       1 main.go:301] handling current node
	I1210 00:12:54.369042       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:54.369048       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:54.369300       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:54.369307       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:54.369408       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:54.369414       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c] <==
	W1210 00:06:33.327544       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187]
	I1210 00:06:33.328436       1 controller.go:615] quota admission added evaluator for: endpoints
	I1210 00:06:33.332351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 00:06:33.644177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1210 00:06:34.401030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1210 00:06:34.426254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 00:06:34.437836       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1210 00:06:39.341658       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1210 00:06:39.388665       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1210 00:09:16.643347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53112: use of closed network connection
	E1210 00:09:16.826908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53130: use of closed network connection
	E1210 00:09:17.054445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53146: use of closed network connection
	E1210 00:09:17.230406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53174: use of closed network connection
	E1210 00:09:17.395919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53190: use of closed network connection
	E1210 00:09:17.578908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53210: use of closed network connection
	E1210 00:09:17.752762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53234: use of closed network connection
	E1210 00:09:17.924915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53246: use of closed network connection
	E1210 00:09:18.096320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53250: use of closed network connection
	E1210 00:09:18.374453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53288: use of closed network connection
	E1210 00:09:18.551219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53308: use of closed network connection
	E1210 00:09:18.715487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53328: use of closed network connection
	E1210 00:09:18.882307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53350: use of closed network connection
	E1210 00:09:19.053232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	E1210 00:09:19.219127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53388: use of closed network connection
	W1210 00:10:43.338652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.244]
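The "use of closed network connection" errors above are logged when a client hangs up while the apiserver is still reading from the socket; here they involve 192.168.39.254:8443, which appears to be the kube-vip virtual IP fronting the control plane, and they are not fatal on their own. Apiserver readiness can be probed directly; a sketch, assuming the same kubectl context name as above:

    kubectl --context ha-070032 get --raw='/readyz?verbose'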
	
	
	==> kube-controller-manager [d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d] <==
	I1210 00:09:49.805217       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-070032-m04" podCIDRs=["10.244.4.0/24"]
	I1210 00:09:49.805335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.805501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.830568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.055099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.429393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:52.233446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.527465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.529595       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-070032-m04"
	I1210 00:09:53.635341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.748163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.769858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:00.115956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.020321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.021003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:10:09.036523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:12.188838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:20.604295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:11:07.214303       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:11:07.214659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.239149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.332434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.113905ms"
	I1210 00:11:07.332808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="177.2µs"
	I1210 00:11:08.619804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:12.462357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	
	
	==> kube-proxy [d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:06:40.034153       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:06:40.050742       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	E1210 00:06:40.050886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:06:40.097328       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:06:40.097397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:06:40.097429       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:06:40.099955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:06:40.100221       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:06:40.100242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:06:40.102079       1 config.go:199] "Starting service config controller"
	I1210 00:06:40.102108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:06:40.102130       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:06:40.102134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:06:40.103442       1 config.go:328] "Starting node config controller"
	I1210 00:06:40.103468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:06:40.203097       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:06:40.203185       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:06:40.203635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca] <==
	W1210 00:06:32.612869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:06:32.612911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:06:32.694210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.728214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:06:32.728261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.890681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:06:32.890785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.906571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:06:32.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:33.046474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:06:33.046616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:06:36.200867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1210 00:09:49.873453       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.876571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" pod="kube-system/kube-proxy-r2tf6"
	I1210 00:09:49.878867       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.879144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.879364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-v5wzl"
	I1210 00:09:49.879740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.938476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j8rtf" node="ha-070032-m04"
	E1210 00:09:49.939506       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-j8rtf"
	E1210 00:09:51.707755       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	E1210 00:09:51.707858       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f925375b-3698-422b-a607-5a92ae55da32(kube-system/kindnet-nqxxb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-nqxxb"
	E1210 00:09:51.707911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-nqxxb"
	I1210 00:09:51.707964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	
	
	==> kubelet <==
	Dec 10 00:11:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:11:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426250    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426301    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.428969    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.429023    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430352    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430374    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432645    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432732    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434466    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434800    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436591    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436615    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.323013    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438072    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438102    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439455    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439836    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441399    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441436    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr: (4.092978526s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.206232141s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m03_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-070032 node start m02 -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:05:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:05:52.791526   97943 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:52.791657   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791669   97943 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:52.791677   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791857   97943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:52.792405   97943 out.go:352] Setting JSON to false
	I1210 00:05:52.793229   97943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6504,"bootTime":1733782649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:52.793329   97943 start.go:139] virtualization: kvm guest
	I1210 00:05:52.796124   97943 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:52.797192   97943 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:52.797225   97943 notify.go:220] Checking for updates...
	I1210 00:05:52.799407   97943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:52.800504   97943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:52.801675   97943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:52.802744   97943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:52.803783   97943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:52.805109   97943 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:52.839813   97943 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:05:52.840958   97943 start.go:297] selected driver: kvm2
	I1210 00:05:52.841009   97943 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:05:52.841037   97943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:52.841764   97943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.841862   97943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:05:52.856053   97943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:05:52.856105   97943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:05:52.856343   97943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:52.856388   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:05:52.856439   97943 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1210 00:05:52.856451   97943 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 00:05:52.856513   97943 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:52.856629   97943 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.858290   97943 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:05:52.859441   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:05:52.859486   97943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:05:52.859496   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:05:52.859571   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:05:52.859584   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:05:52.859883   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:05:52.859904   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json: {Name:mke01e2b75d6b946a14cfa49d40b8237b928645a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:52.860050   97943 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:05:52.860091   97943 start.go:364] duration metric: took 24.816µs to acquireMachinesLock for "ha-070032"
	I1210 00:05:52.860115   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:52.860185   97943 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:05:52.862431   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:05:52.862625   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:52.862674   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:52.876494   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1210 00:05:52.876866   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:52.877406   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:05:52.877428   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:52.877772   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:52.877940   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:05:52.878106   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:05:52.878243   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:05:52.878282   97943 client.go:168] LocalClient.Create starting
	I1210 00:05:52.878351   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:05:52.878400   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878419   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878472   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:05:52.878494   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878509   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878535   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:05:52.878545   97943 main.go:141] libmachine: (ha-070032) Calling .PreCreateCheck
	I1210 00:05:52.878920   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:05:52.879333   97943 main.go:141] libmachine: Creating machine...
	I1210 00:05:52.879348   97943 main.go:141] libmachine: (ha-070032) Calling .Create
	I1210 00:05:52.879474   97943 main.go:141] libmachine: (ha-070032) Creating KVM machine...
	I1210 00:05:52.880541   97943 main.go:141] libmachine: (ha-070032) DBG | found existing default KVM network
	I1210 00:05:52.881177   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.881049   97966 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1210 00:05:52.881198   97943 main.go:141] libmachine: (ha-070032) DBG | created network xml: 
	I1210 00:05:52.881212   97943 main.go:141] libmachine: (ha-070032) DBG | <network>
	I1210 00:05:52.881222   97943 main.go:141] libmachine: (ha-070032) DBG |   <name>mk-ha-070032</name>
	I1210 00:05:52.881231   97943 main.go:141] libmachine: (ha-070032) DBG |   <dns enable='no'/>
	I1210 00:05:52.881237   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881250   97943 main.go:141] libmachine: (ha-070032) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:05:52.881265   97943 main.go:141] libmachine: (ha-070032) DBG |     <dhcp>
	I1210 00:05:52.881279   97943 main.go:141] libmachine: (ha-070032) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:05:52.881290   97943 main.go:141] libmachine: (ha-070032) DBG |     </dhcp>
	I1210 00:05:52.881301   97943 main.go:141] libmachine: (ha-070032) DBG |   </ip>
	I1210 00:05:52.881310   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881318   97943 main.go:141] libmachine: (ha-070032) DBG | </network>
	I1210 00:05:52.881328   97943 main.go:141] libmachine: (ha-070032) DBG | 
	I1210 00:05:52.886258   97943 main.go:141] libmachine: (ha-070032) DBG | trying to create private KVM network mk-ha-070032 192.168.39.0/24...
	I1210 00:05:52.950347   97943 main.go:141] libmachine: (ha-070032) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:52.950384   97943 main.go:141] libmachine: (ha-070032) DBG | private KVM network mk-ha-070032 192.168.39.0/24 created
	I1210 00:05:52.950396   97943 main.go:141] libmachine: (ha-070032) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:05:52.950439   97943 main.go:141] libmachine: (ha-070032) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:05:52.950463   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.950265   97966 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.225909   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.225784   97966 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa...
	I1210 00:05:53.325235   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325112   97966 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk...
	I1210 00:05:53.325266   97943 main.go:141] libmachine: (ha-070032) DBG | Writing magic tar header
	I1210 00:05:53.325288   97943 main.go:141] libmachine: (ha-070032) DBG | Writing SSH key tar header
	I1210 00:05:53.325300   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325244   97966 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:53.325369   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032
	I1210 00:05:53.325394   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 (perms=drwx------)
	I1210 00:05:53.325428   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:05:53.325447   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.325560   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:05:53.325599   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:05:53.325634   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:05:53.325659   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:05:53.325669   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:05:53.325681   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:05:53.325695   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:05:53.325703   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home
	I1210 00:05:53.325715   97943 main.go:141] libmachine: (ha-070032) DBG | Skipping /home - not owner
	I1210 00:05:53.325747   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:05:53.325763   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:53.326682   97943 main.go:141] libmachine: (ha-070032) define libvirt domain using xml: 
	I1210 00:05:53.326699   97943 main.go:141] libmachine: (ha-070032) <domain type='kvm'>
	I1210 00:05:53.326705   97943 main.go:141] libmachine: (ha-070032)   <name>ha-070032</name>
	I1210 00:05:53.326709   97943 main.go:141] libmachine: (ha-070032)   <memory unit='MiB'>2200</memory>
	I1210 00:05:53.326714   97943 main.go:141] libmachine: (ha-070032)   <vcpu>2</vcpu>
	I1210 00:05:53.326718   97943 main.go:141] libmachine: (ha-070032)   <features>
	I1210 00:05:53.326744   97943 main.go:141] libmachine: (ha-070032)     <acpi/>
	I1210 00:05:53.326772   97943 main.go:141] libmachine: (ha-070032)     <apic/>
	I1210 00:05:53.326783   97943 main.go:141] libmachine: (ha-070032)     <pae/>
	I1210 00:05:53.326806   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.326826   97943 main.go:141] libmachine: (ha-070032)   </features>
	I1210 00:05:53.326854   97943 main.go:141] libmachine: (ha-070032)   <cpu mode='host-passthrough'>
	I1210 00:05:53.326865   97943 main.go:141] libmachine: (ha-070032)   
	I1210 00:05:53.326872   97943 main.go:141] libmachine: (ha-070032)   </cpu>
	I1210 00:05:53.326882   97943 main.go:141] libmachine: (ha-070032)   <os>
	I1210 00:05:53.326889   97943 main.go:141] libmachine: (ha-070032)     <type>hvm</type>
	I1210 00:05:53.326900   97943 main.go:141] libmachine: (ha-070032)     <boot dev='cdrom'/>
	I1210 00:05:53.326906   97943 main.go:141] libmachine: (ha-070032)     <boot dev='hd'/>
	I1210 00:05:53.326920   97943 main.go:141] libmachine: (ha-070032)     <bootmenu enable='no'/>
	I1210 00:05:53.326944   97943 main.go:141] libmachine: (ha-070032)   </os>
	I1210 00:05:53.326956   97943 main.go:141] libmachine: (ha-070032)   <devices>
	I1210 00:05:53.326966   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='cdrom'>
	I1210 00:05:53.326982   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/boot2docker.iso'/>
	I1210 00:05:53.326995   97943 main.go:141] libmachine: (ha-070032)       <target dev='hdc' bus='scsi'/>
	I1210 00:05:53.327012   97943 main.go:141] libmachine: (ha-070032)       <readonly/>
	I1210 00:05:53.327027   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327039   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='disk'>
	I1210 00:05:53.327051   97943 main.go:141] libmachine: (ha-070032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:05:53.327066   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk'/>
	I1210 00:05:53.327074   97943 main.go:141] libmachine: (ha-070032)       <target dev='hda' bus='virtio'/>
	I1210 00:05:53.327080   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327086   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327091   97943 main.go:141] libmachine: (ha-070032)       <source network='mk-ha-070032'/>
	I1210 00:05:53.327096   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327101   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327107   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327127   97943 main.go:141] libmachine: (ha-070032)       <source network='default'/>
	I1210 00:05:53.327131   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327138   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327142   97943 main.go:141] libmachine: (ha-070032)     <serial type='pty'>
	I1210 00:05:53.327147   97943 main.go:141] libmachine: (ha-070032)       <target port='0'/>
	I1210 00:05:53.327152   97943 main.go:141] libmachine: (ha-070032)     </serial>
	I1210 00:05:53.327157   97943 main.go:141] libmachine: (ha-070032)     <console type='pty'>
	I1210 00:05:53.327167   97943 main.go:141] libmachine: (ha-070032)       <target type='serial' port='0'/>
	I1210 00:05:53.327176   97943 main.go:141] libmachine: (ha-070032)     </console>
	I1210 00:05:53.327183   97943 main.go:141] libmachine: (ha-070032)     <rng model='virtio'>
	I1210 00:05:53.327188   97943 main.go:141] libmachine: (ha-070032)       <backend model='random'>/dev/random</backend>
	I1210 00:05:53.327201   97943 main.go:141] libmachine: (ha-070032)     </rng>
	I1210 00:05:53.327208   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327212   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327219   97943 main.go:141] libmachine: (ha-070032)   </devices>
	I1210 00:05:53.327223   97943 main.go:141] libmachine: (ha-070032) </domain>
	I1210 00:05:53.327229   97943 main.go:141] libmachine: (ha-070032) 
	I1210 00:05:53.331717   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:3e:64:27 in network default
	I1210 00:05:53.332300   97943 main.go:141] libmachine: (ha-070032) Ensuring networks are active...
	I1210 00:05:53.332321   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:53.332935   97943 main.go:141] libmachine: (ha-070032) Ensuring network default is active
	I1210 00:05:53.333268   97943 main.go:141] libmachine: (ha-070032) Ensuring network mk-ha-070032 is active
	I1210 00:05:53.333775   97943 main.go:141] libmachine: (ha-070032) Getting domain xml...
	I1210 00:05:53.334418   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:54.486671   97943 main.go:141] libmachine: (ha-070032) Waiting to get IP...
	I1210 00:05:54.487631   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.488004   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.488023   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.487962   97966 retry.go:31] will retry after 250.94638ms: waiting for machine to come up
	I1210 00:05:54.740488   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.740898   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.740922   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.740853   97966 retry.go:31] will retry after 369.652496ms: waiting for machine to come up
	I1210 00:05:55.112670   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.113058   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.113088   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.113006   97966 retry.go:31] will retry after 419.563235ms: waiting for machine to come up
	I1210 00:05:55.534593   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.535015   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.535042   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.534960   97966 retry.go:31] will retry after 426.548067ms: waiting for machine to come up
	I1210 00:05:55.963569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.963962   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.963978   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.963937   97966 retry.go:31] will retry after 617.965427ms: waiting for machine to come up
	I1210 00:05:56.583725   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:56.584072   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:56.584105   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:56.584063   97966 retry.go:31] will retry after 856.526353ms: waiting for machine to come up
	I1210 00:05:57.442311   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:57.442739   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:57.442796   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:57.442703   97966 retry.go:31] will retry after 1.178569719s: waiting for machine to come up
	I1210 00:05:58.622338   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:58.622797   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:58.622827   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:58.622728   97966 retry.go:31] will retry after 1.42624777s: waiting for machine to come up
	I1210 00:06:00.051240   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:00.051614   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:00.051640   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:00.051572   97966 retry.go:31] will retry after 1.801666778s: waiting for machine to come up
	I1210 00:06:01.855728   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:01.856159   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:01.856181   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:01.856123   97966 retry.go:31] will retry after 2.078837624s: waiting for machine to come up
	I1210 00:06:03.936907   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:03.937387   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:03.937421   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:03.937345   97966 retry.go:31] will retry after 2.395168214s: waiting for machine to come up
	I1210 00:06:06.336012   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:06.336380   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:06.336409   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:06.336336   97966 retry.go:31] will retry after 2.386978523s: waiting for machine to come up
	I1210 00:06:08.725386   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:08.725781   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:08.725809   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:08.725749   97966 retry.go:31] will retry after 4.346211813s: waiting for machine to come up
	I1210 00:06:13.073905   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.074439   97943 main.go:141] libmachine: (ha-070032) Found IP for machine: 192.168.39.187
	I1210 00:06:13.074469   97943 main.go:141] libmachine: (ha-070032) Reserving static IP address...
	I1210 00:06:13.074487   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has current primary IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.075078   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "ha-070032", mac: "52:54:00:ad:ce:dc", ip: "192.168.39.187"} in network mk-ha-070032
	I1210 00:06:13.145743   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:13.145776   97943 main.go:141] libmachine: (ha-070032) Reserved static IP address: 192.168.39.187
	I1210 00:06:13.145818   97943 main.go:141] libmachine: (ha-070032) Waiting for SSH to be available...
	I1210 00:06:13.148440   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.148825   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032
	I1210 00:06:13.148851   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:ad:ce:dc
	I1210 00:06:13.149012   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:13.149039   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:13.149072   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:13.149085   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:13.149097   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:13.152933   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:06:13.152951   97943 main.go:141] libmachine: (ha-070032) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:06:13.152957   97943 main.go:141] libmachine: (ha-070032) DBG | command : exit 0
	I1210 00:06:13.152962   97943 main.go:141] libmachine: (ha-070032) DBG | err     : exit status 255
	I1210 00:06:13.152969   97943 main.go:141] libmachine: (ha-070032) DBG | output  : 
	I1210 00:06:16.155027   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:16.157296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157685   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.157714   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157840   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:16.157860   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:16.157887   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:16.157900   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:16.157909   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:16.278179   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: <nil>: 
	I1210 00:06:16.278456   97943 main.go:141] libmachine: (ha-070032) KVM machine creation complete!
	I1210 00:06:16.278762   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:16.279308   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279502   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279643   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:06:16.279659   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:16.280933   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:06:16.280956   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:06:16.280962   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:06:16.280968   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.283215   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283661   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.283689   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283820   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.283997   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284144   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284266   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.284430   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.284659   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.284672   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:06:16.381723   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.381748   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:06:16.381756   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.384507   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384824   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.384850   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384978   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.385166   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385349   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385493   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.385645   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.385854   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.385866   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:06:16.482791   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:06:16.482875   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:06:16.482890   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:06:16.482898   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483155   97943 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:06:16.483181   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483360   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.485848   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486193   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.486234   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486327   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.486524   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486696   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486841   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.486993   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.487168   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.487182   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:06:16.599563   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:06:16.599595   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.602261   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602629   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.602659   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602789   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.603020   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603241   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603430   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.603599   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.603761   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.603781   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:06:16.710380   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.710422   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:06:16.710472   97943 buildroot.go:174] setting up certificates
	I1210 00:06:16.710489   97943 provision.go:84] configureAuth start
	I1210 00:06:16.710503   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.710783   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:16.713296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713682   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.713712   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713807   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.716284   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716639   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.716657   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716807   97943 provision.go:143] copyHostCerts
	I1210 00:06:16.716848   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716882   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:06:16.716898   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716962   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:06:16.717048   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717075   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:06:16.717082   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717107   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:06:16.717158   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717175   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:06:16.717181   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717202   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:06:16.717253   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
	I1210 00:06:16.857455   97943 provision.go:177] copyRemoteCerts
	I1210 00:06:16.857514   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:06:16.857542   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.860287   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860660   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.860687   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860918   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.861136   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.861318   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.861436   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:16.940074   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:06:16.940147   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:06:16.961938   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:06:16.962011   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:06:16.982947   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:06:16.983027   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:06:17.003600   97943 provision.go:87] duration metric: took 293.095287ms to configureAuth
	I1210 00:06:17.003631   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:06:17.003823   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:17.003908   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.006244   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006580   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.006608   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006735   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.006932   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007076   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007191   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.007315   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.007484   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.007502   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:06:17.211708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:06:17.211741   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:06:17.211753   97943 main.go:141] libmachine: (ha-070032) Calling .GetURL
	I1210 00:06:17.212951   97943 main.go:141] libmachine: (ha-070032) DBG | Using libvirt version 6000000
	I1210 00:06:17.215245   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215611   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.215644   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215769   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:06:17.215785   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:06:17.215796   97943 client.go:171] duration metric: took 24.337498941s to LocalClient.Create
	I1210 00:06:17.215826   97943 start.go:167] duration metric: took 24.337582238s to libmachine.API.Create "ha-070032"
	I1210 00:06:17.215839   97943 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:06:17.215862   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:06:17.215886   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.216149   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:06:17.216177   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.218250   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218590   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.218632   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218752   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.218921   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.219062   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.219188   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.296211   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:06:17.300251   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:06:17.300276   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:06:17.300345   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:06:17.300421   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:06:17.300431   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:06:17.300529   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:06:17.308961   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:17.331496   97943 start.go:296] duration metric: took 115.636437ms for postStartSetup
	I1210 00:06:17.331591   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:17.332201   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.335151   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335527   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.335569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335747   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:17.335921   97943 start.go:128] duration metric: took 24.475725142s to createHost
	I1210 00:06:17.335945   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.338044   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338384   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.338412   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338541   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.338741   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.338882   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.339001   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.339163   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.339337   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.339348   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:06:17.439329   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789177.417194070
	
	I1210 00:06:17.439361   97943 fix.go:216] guest clock: 1733789177.417194070
	I1210 00:06:17.439372   97943 fix.go:229] Guest: 2024-12-10 00:06:17.41719407 +0000 UTC Remote: 2024-12-10 00:06:17.335933593 +0000 UTC m=+24.582014233 (delta=81.260477ms)
	I1210 00:06:17.439408   97943 fix.go:200] guest clock delta is within tolerance: 81.260477ms
	I1210 00:06:17.439416   97943 start.go:83] releasing machines lock for "ha-070032", held for 24.579311872s
	I1210 00:06:17.439440   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.439778   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.442802   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443261   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.443289   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443497   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444002   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444206   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444324   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:06:17.444401   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.444474   97943 ssh_runner.go:195] Run: cat /version.json
	I1210 00:06:17.444500   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.446933   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447294   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447320   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447352   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447499   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.447688   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.447744   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447772   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447844   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.447953   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.448103   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.448103   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.448278   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.448402   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.553500   97943 ssh_runner.go:195] Run: systemctl --version
	I1210 00:06:17.559183   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:06:17.714099   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:06:17.720445   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:06:17.720522   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:06:17.735693   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:06:17.735715   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:06:17.735777   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:06:17.750781   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:06:17.763333   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:06:17.763379   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:06:17.775483   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:06:17.787288   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:06:17.890184   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:06:18.028147   97943 docker.go:233] disabling docker service ...
	I1210 00:06:18.028234   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:06:18.041611   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:06:18.054485   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:06:18.194456   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:06:18.314202   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:06:18.327181   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:06:18.343918   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:06:18.343989   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.353427   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:06:18.353489   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.362859   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.371991   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.381017   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:06:18.391381   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.401252   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.416290   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.426233   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:06:18.435267   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:06:18.435316   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:06:18.447946   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:06:18.456951   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:18.573205   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:06:18.656643   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:06:18.656726   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:06:18.661011   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:06:18.661071   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:06:18.664478   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:06:18.701494   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:06:18.701578   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.727238   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.753327   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:06:18.754595   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:18.756947   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757200   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:18.757235   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757445   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:06:18.760940   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:06:18.772727   97943 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:06:18.772828   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:18.772879   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:18.804204   97943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:06:18.804265   97943 ssh_runner.go:195] Run: which lz4
	I1210 00:06:18.807579   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1210 00:06:18.807670   97943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:06:18.811358   97943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:06:18.811386   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:06:19.965583   97943 crio.go:462] duration metric: took 1.157944737s to copy over tarball
	I1210 00:06:19.965660   97943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:06:21.934864   97943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.969164039s)
	I1210 00:06:21.934896   97943 crio.go:469] duration metric: took 1.969285734s to extract the tarball
	I1210 00:06:21.934906   97943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:06:21.970025   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:22.022669   97943 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:06:22.022692   97943 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:06:22.022702   97943 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:06:22.022843   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:06:22.022948   97943 ssh_runner.go:195] Run: crio config
	I1210 00:06:22.066130   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:22.066152   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:22.066160   97943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:06:22.066182   97943 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:06:22.066308   97943 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:06:22.066339   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:06:22.066403   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:06:22.080860   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:06:22.080973   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
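
As a hedged aside (not part of the original log): once the kubelet picks up this static-pod manifest, whichever control-plane node currently holds the plndr-cp-lock lease should bind the VIP 192.168.39.254 as an extra address on eth0. A minimal on-node check, assuming SSH access to the VM, might look like:

  # Illustrative only: verify the kube-vip static pod and the HA VIP on the node.
  sudo crictl ps --name kube-vip                 # kube-vip container is running
  ip addr show eth0 | grep 192.168.39.254        # VIP is bound on the current lease holder
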
	I1210 00:06:22.081051   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:06:22.089866   97943 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:06:22.089923   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:06:22.098290   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:06:22.112742   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:06:22.127069   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
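
As an illustrative aside (not taken from the log), the multi-document kubeadm config written above can be sanity-checked on the node before init runs; this assumes the `kubeadm config validate` subcommand, which is present in recent kubeadm releases:

  # Hypothetical pre-flight sanity check of the generated config file.
  sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new
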
	I1210 00:06:22.141317   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1210 00:06:22.155689   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:06:22.159003   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
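
A short hedged check (not part of the log) that the rewrite above took effect: the control-plane endpoint name should now map to the HA VIP in /etc/hosts.

  # Expect one line mapping the VIP to the cluster endpoint name.
  grep control-plane.minikube.internal /etc/hosts   # 192.168.39.254  control-plane.minikube.internal
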
	I1210 00:06:22.169321   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:22.288035   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:06:22.303534   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:06:22.303559   97943 certs.go:194] generating shared ca certs ...
	I1210 00:06:22.303580   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.303764   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:06:22.303807   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:06:22.303816   97943 certs.go:256] generating profile certs ...
	I1210 00:06:22.303867   97943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:06:22.303881   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt with IP's: []
	I1210 00:06:22.579094   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt ...
	I1210 00:06:22.579127   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt: {Name:mk6da1df398501169ebaa4be6e0991a8cdf439ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579330   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key ...
	I1210 00:06:22.579344   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key: {Name:mkcfad0deb7a44a0416ffc9ec52ed32ba5314a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579449   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8
	I1210 00:06:22.579465   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.254]
	I1210 00:06:22.676685   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 ...
	I1210 00:06:22.676712   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8: {Name:mke16dbfb98e7219f2bbc6176b557aae983cf59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.676895   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 ...
	I1210 00:06:22.676911   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8: {Name:mke38a755e8856925c614e9671ffbd341e4bacfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.677005   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:06:22.677102   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:06:22.677175   97943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:06:22.677191   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt with IP's: []
	I1210 00:06:23.248653   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt ...
	I1210 00:06:23.248694   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt: {Name:mk109f5f541d0487f6eee37e10618be0687d2257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.248940   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key ...
	I1210 00:06:23.248958   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key: {Name:mkb6a55c3dbe59a4c5c10d115460729fd5017c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.249084   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:06:23.249122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:06:23.249145   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:06:23.249169   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:06:23.249185   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:06:23.249208   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:06:23.249231   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:06:23.249252   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:06:23.249332   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:06:23.249393   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:06:23.249407   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:06:23.249449   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:06:23.249487   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:06:23.249528   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:06:23.249593   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:23.249643   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.249668   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.249692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.250316   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:06:23.282882   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:06:23.307116   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:06:23.329842   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:06:23.350860   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:06:23.371360   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:06:23.391801   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:06:23.412467   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:06:23.433690   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:06:23.454439   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:06:23.475132   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:06:23.495728   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:06:23.510105   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:06:23.515363   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:06:23.524990   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528859   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528911   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.534177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:06:23.544011   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:06:23.554049   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558290   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558341   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.563770   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:06:23.574235   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:06:23.584591   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588826   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588880   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.594177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
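
The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the short hash that the certificate lookup path expects as the symlink name, and the `.0` suffix disambiguates collisions. A small sketch of the same pattern, not taken from the log:

  # Compute a CA certificate's subject hash and create the matching lookup symlink.
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
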
	I1210 00:06:23.604355   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:06:23.608126   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:06:23.608176   97943 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:06:23.608256   97943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:06:23.608313   97943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:06:23.644503   97943 cri.go:89] found id: ""
	I1210 00:06:23.644571   97943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:06:23.653924   97943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:06:23.666641   97943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:06:23.677490   97943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:06:23.677512   97943 kubeadm.go:157] found existing configuration files:
	
	I1210 00:06:23.677553   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:06:23.685837   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:06:23.685897   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:06:23.696600   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:06:23.706796   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:06:23.706854   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:06:23.717362   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.727400   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:06:23.727453   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.737844   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:06:23.747833   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:06:23.747889   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:06:23.758170   97943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:06:23.860329   97943 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:06:23.860398   97943 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:06:23.982444   97943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:06:23.982606   97943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:06:23.982761   97943 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:06:23.992051   97943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:06:24.260435   97943 out.go:235]   - Generating certificates and keys ...
	I1210 00:06:24.260672   97943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:06:24.260758   97943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:06:24.260858   97943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:06:24.290159   97943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:06:24.463743   97943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:06:24.802277   97943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:06:24.950429   97943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:06:24.950692   97943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.094704   97943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:06:25.094857   97943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.315955   97943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:06:25.908434   97943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:06:26.061724   97943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:06:26.061977   97943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:06:26.261701   97943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:06:26.508681   97943 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:06:26.626369   97943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:06:26.773060   97943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:06:26.898048   97943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:06:26.900096   97943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:06:26.903197   97943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:06:26.904929   97943 out.go:235]   - Booting up control plane ...
	I1210 00:06:26.905029   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:06:26.905121   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:06:26.905279   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:06:26.919661   97943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:06:26.926359   97943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:06:26.926414   97943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:06:27.050156   97943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:06:27.050350   97943 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:06:27.551278   97943 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.620144ms
	I1210 00:06:27.551408   97943 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:06:33.591605   97943 kubeadm.go:310] [api-check] The API server is healthy after 6.043312277s
	I1210 00:06:33.609669   97943 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:06:33.625260   97943 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:06:33.653756   97943 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:06:33.653955   97943 kubeadm.go:310] [mark-control-plane] Marking the node ha-070032 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:06:33.666679   97943 kubeadm.go:310] [bootstrap-token] Using token: j34izu.9ybowi8hhzn9pxj2
	I1210 00:06:33.668028   97943 out.go:235]   - Configuring RBAC rules ...
	I1210 00:06:33.668176   97943 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:06:33.684358   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:06:33.695755   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:06:33.698959   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:06:33.704573   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:06:33.710289   97943 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:06:34.000325   97943 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:06:34.440225   97943 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:06:35.001489   97943 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:06:35.002397   97943 kubeadm.go:310] 
	I1210 00:06:35.002481   97943 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:06:35.002492   97943 kubeadm.go:310] 
	I1210 00:06:35.002620   97943 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:06:35.002641   97943 kubeadm.go:310] 
	I1210 00:06:35.002668   97943 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:06:35.002729   97943 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:06:35.002789   97943 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:06:35.002807   97943 kubeadm.go:310] 
	I1210 00:06:35.002880   97943 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:06:35.002909   97943 kubeadm.go:310] 
	I1210 00:06:35.002973   97943 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:06:35.002982   97943 kubeadm.go:310] 
	I1210 00:06:35.003062   97943 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:06:35.003170   97943 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:06:35.003276   97943 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:06:35.003287   97943 kubeadm.go:310] 
	I1210 00:06:35.003407   97943 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:06:35.003521   97943 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:06:35.003539   97943 kubeadm.go:310] 
	I1210 00:06:35.003652   97943 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.003744   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 00:06:35.003795   97943 kubeadm.go:310] 	--control-plane 
	I1210 00:06:35.003809   97943 kubeadm.go:310] 
	I1210 00:06:35.003925   97943 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:06:35.003934   97943 kubeadm.go:310] 
	I1210 00:06:35.004033   97943 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.004174   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 00:06:35.004857   97943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
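
The single preflight warning above only notes that the kubelet unit is not enabled at boot; minikube starts the service itself, so init proceeds, but the fix kubeadm suggests is one command (shown here as an aside, not from the log):

  # Enable the kubelet unit so it also starts automatically after a reboot.
  sudo systemctl enable kubelet.service
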
	I1210 00:06:35.005000   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:35.005014   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:35.006644   97943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1210 00:06:35.007773   97943 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 00:06:35.013278   97943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1210 00:06:35.013292   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 00:06:35.030575   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
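
As an illustrative follow-up, and assuming (not shown in the log) that minikube's kindnet manifest creates a DaemonSet named kindnet in kube-system, the CNI rollout could be confirmed with:

  # Hypothetical verification that the kindnet CNI pods become Ready.
  kubectl --context ha-070032 -n kube-system rollout status daemonset/kindnet
  kubectl --context ha-070032 -n kube-system get pods -l app=kindnet -o wide
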
	I1210 00:06:35.430253   97943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032 minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=true
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:35.453581   97943 ops.go:34] apiserver oom_adj: -16
	I1210 00:06:35.589407   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.090147   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.590386   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.089563   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.589509   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.090045   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.590492   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.670226   97943 kubeadm.go:1113] duration metric: took 3.23992517s to wait for elevateKubeSystemPrivileges
	I1210 00:06:38.670279   97943 kubeadm.go:394] duration metric: took 15.062107151s to StartCluster
	I1210 00:06:38.670305   97943 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.670408   97943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.671197   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.671402   97943 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:38.671412   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 00:06:38.671420   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:06:38.671426   97943 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:06:38.671508   97943 addons.go:69] Setting storage-provisioner=true in profile "ha-070032"
	I1210 00:06:38.671518   97943 addons.go:69] Setting default-storageclass=true in profile "ha-070032"
	I1210 00:06:38.671525   97943 addons.go:234] Setting addon storage-provisioner=true in "ha-070032"
	I1210 00:06:38.671543   97943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-070032"
	I1210 00:06:38.671557   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.671580   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:38.671976   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672006   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672032   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.672011   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.687036   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I1210 00:06:38.687249   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I1210 00:06:38.687528   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.687798   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.688109   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688138   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688273   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688294   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688523   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688665   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688726   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.689111   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.689137   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.690837   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.691061   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:06:38.691470   97943 cert_rotation.go:140] Starting client certificate rotation controller
	I1210 00:06:38.691733   97943 addons.go:234] Setting addon default-storageclass=true in "ha-070032"
	I1210 00:06:38.691777   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.692023   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.692051   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.704916   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1210 00:06:38.705299   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.705773   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.705793   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.705818   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I1210 00:06:38.706223   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.706266   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.706378   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.706814   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.706838   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.707185   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.707762   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.707794   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.707810   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.709839   97943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:06:38.711065   97943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.711090   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:06:38.711109   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.713927   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714361   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.714394   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714642   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.714813   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.715016   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.715175   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.722431   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I1210 00:06:38.722864   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.723276   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.723296   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.723661   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.723828   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.725166   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.725377   97943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:38.725391   97943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:06:38.725405   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.727990   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728394   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.728425   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728556   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.728718   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.728851   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.729006   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.796897   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 00:06:38.828298   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.901174   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:39.211073   97943 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
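
A hedged sketch (not part of the log) of confirming the injected record: the CoreDNS Corefile should now contain a hosts block mapping host.minikube.internal to the gateway 192.168.39.1.

  # Print the Corefile and look for the injected hosts block.
  kubectl --context ha-070032 -n kube-system get configmap coredns \
    -o jsonpath='{.data.Corefile}' | grep -A2 'hosts {'
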
	I1210 00:06:39.326332   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326356   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326414   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326438   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326675   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326704   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326718   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326722   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326732   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326740   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326767   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326783   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326792   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326799   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326952   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326963   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327027   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.327032   97943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:06:39.327042   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327048   97943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:06:39.327148   97943 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1210 00:06:39.327161   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.327179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.327194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.340698   97943 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1210 00:06:39.341273   97943 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1210 00:06:39.341288   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.341295   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.341298   97943 round_trippers.go:473]     Content-Type: application/json
	I1210 00:06:39.341303   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.344902   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:06:39.345090   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.345105   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.345391   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.345413   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.345420   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.347624   97943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:06:39.348926   97943 addons.go:510] duration metric: took 677.497681ms for enable addons: enabled=[storage-provisioner default-storageclass]
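
For completeness, a hedged example (not from the log) of confirming the two enabled addons from the host, using the profile name shown above:

  # Hypothetical check against the ha-070032 profile.
  minikube -p ha-070032 addons list | grep -E 'storage-provisioner|default-storageclass'
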
	I1210 00:06:39.348959   97943 start.go:246] waiting for cluster config update ...
	I1210 00:06:39.348973   97943 start.go:255] writing updated cluster config ...
	I1210 00:06:39.350585   97943 out.go:201] 
	I1210 00:06:39.351879   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:39.351939   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.353507   97943 out.go:177] * Starting "ha-070032-m02" control-plane node in "ha-070032" cluster
	I1210 00:06:39.354653   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:39.354670   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:06:39.354757   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:06:39.354768   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:06:39.354822   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.354986   97943 start.go:360] acquireMachinesLock for ha-070032-m02: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:06:39.355029   97943 start.go:364] duration metric: took 24.389µs to acquireMachinesLock for "ha-070032-m02"
	I1210 00:06:39.355043   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:39.355103   97943 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1210 00:06:39.356785   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:06:39.356859   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:39.356884   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:39.373740   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I1210 00:06:39.374206   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:39.374743   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:39.374764   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:39.375056   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:39.375244   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:06:39.375358   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:06:39.375496   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:06:39.375520   97943 client.go:168] LocalClient.Create starting
	I1210 00:06:39.375545   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:06:39.375577   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375591   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375644   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:06:39.375662   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375672   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375686   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:06:39.375694   97943 main.go:141] libmachine: (ha-070032-m02) Calling .PreCreateCheck
	I1210 00:06:39.375822   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:06:39.376224   97943 main.go:141] libmachine: Creating machine...
	I1210 00:06:39.376240   97943 main.go:141] libmachine: (ha-070032-m02) Calling .Create
	I1210 00:06:39.376365   97943 main.go:141] libmachine: (ha-070032-m02) Creating KVM machine...
	I1210 00:06:39.377639   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing default KVM network
	I1210 00:06:39.377788   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing private KVM network mk-ha-070032
	I1210 00:06:39.377977   97943 main.go:141] libmachine: (ha-070032-m02) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.378006   97943 main.go:141] libmachine: (ha-070032-m02) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:06:39.378048   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.377952   98310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.378126   97943 main.go:141] libmachine: (ha-070032-m02) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:06:39.655003   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.654863   98310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa...
	I1210 00:06:39.917373   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917261   98310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk...
	I1210 00:06:39.917409   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing magic tar header
	I1210 00:06:39.917424   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing SSH key tar header
	I1210 00:06:39.917437   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917371   98310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.917498   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02
	I1210 00:06:39.917529   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 (perms=drwx------)
	I1210 00:06:39.917548   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:06:39.917560   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:06:39.917572   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:06:39.917584   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:06:39.917605   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:06:39.917616   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.917629   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:06:39.917642   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:06:39.917652   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:06:39.917664   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:06:39.917673   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home
	I1210 00:06:39.917683   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:39.917707   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Skipping /home - not owner
	I1210 00:06:39.918676   97943 main.go:141] libmachine: (ha-070032-m02) define libvirt domain using xml: 
	I1210 00:06:39.918698   97943 main.go:141] libmachine: (ha-070032-m02) <domain type='kvm'>
	I1210 00:06:39.918768   97943 main.go:141] libmachine: (ha-070032-m02)   <name>ha-070032-m02</name>
	I1210 00:06:39.918816   97943 main.go:141] libmachine: (ha-070032-m02)   <memory unit='MiB'>2200</memory>
	I1210 00:06:39.918844   97943 main.go:141] libmachine: (ha-070032-m02)   <vcpu>2</vcpu>
	I1210 00:06:39.918860   97943 main.go:141] libmachine: (ha-070032-m02)   <features>
	I1210 00:06:39.918868   97943 main.go:141] libmachine: (ha-070032-m02)     <acpi/>
	I1210 00:06:39.918874   97943 main.go:141] libmachine: (ha-070032-m02)     <apic/>
	I1210 00:06:39.918881   97943 main.go:141] libmachine: (ha-070032-m02)     <pae/>
	I1210 00:06:39.918890   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.918898   97943 main.go:141] libmachine: (ha-070032-m02)   </features>
	I1210 00:06:39.918908   97943 main.go:141] libmachine: (ha-070032-m02)   <cpu mode='host-passthrough'>
	I1210 00:06:39.918914   97943 main.go:141] libmachine: (ha-070032-m02)   
	I1210 00:06:39.918920   97943 main.go:141] libmachine: (ha-070032-m02)   </cpu>
	I1210 00:06:39.918932   97943 main.go:141] libmachine: (ha-070032-m02)   <os>
	I1210 00:06:39.918939   97943 main.go:141] libmachine: (ha-070032-m02)     <type>hvm</type>
	I1210 00:06:39.918951   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='cdrom'/>
	I1210 00:06:39.918960   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='hd'/>
	I1210 00:06:39.918969   97943 main.go:141] libmachine: (ha-070032-m02)     <bootmenu enable='no'/>
	I1210 00:06:39.918978   97943 main.go:141] libmachine: (ha-070032-m02)   </os>
	I1210 00:06:39.918985   97943 main.go:141] libmachine: (ha-070032-m02)   <devices>
	I1210 00:06:39.918996   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='cdrom'>
	I1210 00:06:39.919011   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/boot2docker.iso'/>
	I1210 00:06:39.919023   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hdc' bus='scsi'/>
	I1210 00:06:39.919034   97943 main.go:141] libmachine: (ha-070032-m02)       <readonly/>
	I1210 00:06:39.919044   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919053   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='disk'>
	I1210 00:06:39.919066   97943 main.go:141] libmachine: (ha-070032-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:06:39.919085   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk'/>
	I1210 00:06:39.919096   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hda' bus='virtio'/>
	I1210 00:06:39.919106   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919113   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919121   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='mk-ha-070032'/>
	I1210 00:06:39.919132   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919140   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919150   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919158   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='default'/>
	I1210 00:06:39.919168   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919177   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919187   97943 main.go:141] libmachine: (ha-070032-m02)     <serial type='pty'>
	I1210 00:06:39.919201   97943 main.go:141] libmachine: (ha-070032-m02)       <target port='0'/>
	I1210 00:06:39.919211   97943 main.go:141] libmachine: (ha-070032-m02)     </serial>
	I1210 00:06:39.919220   97943 main.go:141] libmachine: (ha-070032-m02)     <console type='pty'>
	I1210 00:06:39.919230   97943 main.go:141] libmachine: (ha-070032-m02)       <target type='serial' port='0'/>
	I1210 00:06:39.919239   97943 main.go:141] libmachine: (ha-070032-m02)     </console>
	I1210 00:06:39.919249   97943 main.go:141] libmachine: (ha-070032-m02)     <rng model='virtio'>
	I1210 00:06:39.919261   97943 main.go:141] libmachine: (ha-070032-m02)       <backend model='random'>/dev/random</backend>
	I1210 00:06:39.919271   97943 main.go:141] libmachine: (ha-070032-m02)     </rng>
	I1210 00:06:39.919278   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919287   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919296   97943 main.go:141] libmachine: (ha-070032-m02)   </devices>
	I1210 00:06:39.919305   97943 main.go:141] libmachine: (ha-070032-m02) </domain>
	I1210 00:06:39.919315   97943 main.go:141] libmachine: (ha-070032-m02) 
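	Note: the XML above is what the kvm2 driver hands to libvirt; the "Creating domain..." step that follows defines and starts the VM. Below is a minimal sketch of that define-and-start flow using the libvirt Go bindings (libvirt.org/go/libvirt); the XML file name is hypothetical and error handling is abbreviated, so this is illustrative rather than minikube's actual implementation.

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the same URI as KVMQemuURI in the machine config above.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Hypothetical file holding the <domain> XML printed above.
		xml, err := os.ReadFile("ha-070032-m02.xml")
		if err != nil {
			log.Fatal(err)
		}

		// "define libvirt domain using xml"
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		// "Creating domain..." — actually boots the VM.
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain started; next step is waiting for a DHCP lease")
	}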
	I1210 00:06:39.926117   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:48:53:e3 in network default
	I1210 00:06:39.926859   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring networks are active...
	I1210 00:06:39.926888   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:39.927703   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network default is active
	I1210 00:06:39.928027   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network mk-ha-070032 is active
	I1210 00:06:39.928408   97943 main.go:141] libmachine: (ha-070032-m02) Getting domain xml...
	I1210 00:06:39.929223   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:41.130495   97943 main.go:141] libmachine: (ha-070032-m02) Waiting to get IP...
	I1210 00:06:41.131359   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.131738   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.131767   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.131705   98310 retry.go:31] will retry after 310.664463ms: waiting for machine to come up
	I1210 00:06:41.444273   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.444703   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.444737   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.444646   98310 retry.go:31] will retry after 238.189723ms: waiting for machine to come up
	I1210 00:06:41.683967   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.684372   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.684404   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.684311   98310 retry.go:31] will retry after 302.841079ms: waiting for machine to come up
	I1210 00:06:41.988975   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.989468   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.989592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.989406   98310 retry.go:31] will retry after 546.191287ms: waiting for machine to come up
	I1210 00:06:42.536796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:42.537343   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:42.537376   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:42.537279   98310 retry.go:31] will retry after 759.959183ms: waiting for machine to come up
	I1210 00:06:43.299192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.299592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.299618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.299550   98310 retry.go:31] will retry after 662.514804ms: waiting for machine to come up
	I1210 00:06:43.963192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.963574   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.963604   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.963510   98310 retry.go:31] will retry after 928.068602ms: waiting for machine to come up
	I1210 00:06:44.892786   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:44.893282   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:44.893308   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:44.893234   98310 retry.go:31] will retry after 1.121647824s: waiting for machine to come up
	I1210 00:06:46.016637   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:46.017063   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:46.017120   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:46.017054   98310 retry.go:31] will retry after 1.26533881s: waiting for machine to come up
	I1210 00:06:47.283663   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:47.284077   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:47.284103   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:47.284029   98310 retry.go:31] will retry after 1.959318884s: waiting for machine to come up
	I1210 00:06:49.245134   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:49.245690   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:49.245721   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:49.245628   98310 retry.go:31] will retry after 2.080479898s: waiting for machine to come up
	I1210 00:06:51.327593   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:51.327959   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:51.327986   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:51.327912   98310 retry.go:31] will retry after 3.384865721s: waiting for machine to come up
	I1210 00:06:54.714736   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:54.715082   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:54.715116   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:54.715033   98310 retry.go:31] will retry after 4.262963095s: waiting for machine to come up
	I1210 00:06:58.982522   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:58.982919   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:58.982944   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:58.982868   98310 retry.go:31] will retry after 4.754254966s: waiting for machine to come up
	I1210 00:07:03.739570   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740201   97943 main.go:141] libmachine: (ha-070032-m02) Found IP for machine: 192.168.39.198
	I1210 00:07:03.740228   97943 main.go:141] libmachine: (ha-070032-m02) Reserving static IP address...
	I1210 00:07:03.740250   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740875   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "ha-070032-m02", mac: "52:54:00:a4:53:39", ip: "192.168.39.198"} in network mk-ha-070032
	I1210 00:07:03.810694   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:03.810726   97943 main.go:141] libmachine: (ha-070032-m02) Reserved static IP address: 192.168.39.198
	I1210 00:07:03.810777   97943 main.go:141] libmachine: (ha-070032-m02) Waiting for SSH to be available...
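	Note: the "will retry after ...: waiting for machine to come up" lines above come from polling libvirt for a DHCP lease with jittered, growing delays. A rough sketch of that pattern follows; the lookupIP helper, the base delay, and the growth formula are illustrative assumptions, not minikube's retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookupIP with a jittered, growing delay until the domain
	// reports an address or the deadline passes, echoing the retry lines above.
	func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
		base := 250 * time.Millisecond
		start := time.Now()
		for attempt := 0; ; attempt++ {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			if time.Since(start) > deadline {
				return "", errors.New("timed out waiting for machine to come up")
			}
			// Grow the delay every few attempts and add jitter (illustrative formula).
			delay := base*time.Duration(1<<uint(attempt/3)) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
	}

	func main() {
		// Stub lookup that immediately "finds" the address the real run got from DHCP.
		ip, err := waitForIP(func() (string, error) { return "192.168.39.198", nil }, time.Minute)
		fmt.Println(ip, err)
	}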
	I1210 00:07:03.813164   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.813481   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032
	I1210 00:07:03.813508   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:a4:53:39
	I1210 00:07:03.813691   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:03.813726   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:03.813759   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:03.813774   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:03.813802   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:03.817377   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:07:03.817395   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:07:03.817406   97943 main.go:141] libmachine: (ha-070032-m02) DBG | command : exit 0
	I1210 00:07:03.817413   97943 main.go:141] libmachine: (ha-070032-m02) DBG | err     : exit status 255
	I1210 00:07:03.817429   97943 main.go:141] libmachine: (ha-070032-m02) DBG | output  : 
	I1210 00:07:06.818972   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:06.821618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822027   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.822055   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822215   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:06.822245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:06.822283   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:06.822309   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:06.822322   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:06.950206   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: <nil>: 
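	Note: WaitForSSH above shells out to the system ssh client with the options shown and treats a successful `exit 0` as "SSH is available" (the first attempt fails with exit status 255 because sshd inside the guest is not up yet). A minimal sketch of that probe follows; the fixed 3-second retry interval is an assumption taken from the gap between the two attempts above.

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	// sshReady runs `ssh ... docker@<ip> exit 0` like the external SSH probe in
	// the log above; a non-nil error covers cases such as exit status 255.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip,
			"exit 0",
		)
		return cmd.Run() == nil
	}

	func main() {
		ip := "192.168.39.198" // address from the DHCP lease above
		key := "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa"
		for !sshReady(ip, key) {
			log.Println("SSH not ready yet, retrying in 3s")
			time.Sleep(3 * time.Second)
		}
		log.Println("SSH is available")
	}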
	I1210 00:07:06.950523   97943 main.go:141] libmachine: (ha-070032-m02) KVM machine creation complete!
	I1210 00:07:06.950797   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:06.951365   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951576   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951700   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:07:06.951712   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetState
	I1210 00:07:06.952852   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:07:06.952870   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:07:06.952875   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:07:06.952881   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:06.955132   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955556   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.955577   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955708   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:06.955904   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956047   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:06.956344   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:06.956613   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:06.956635   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:07:07.065432   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.065465   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:07:07.065472   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.068281   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068647   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.068676   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068789   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.069000   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069205   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069353   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.069507   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.069682   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.069696   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:07:07.179172   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:07:07.179254   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:07:07.179270   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:07:07.179281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179507   97943 buildroot.go:166] provisioning hostname "ha-070032-m02"
	I1210 00:07:07.179525   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179714   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.182380   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182709   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.182735   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182903   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.183097   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183236   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183392   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.183547   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.183709   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.183720   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m02 && echo "ha-070032-m02" | sudo tee /etc/hostname
	I1210 00:07:07.308107   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m02
	
	I1210 00:07:07.308157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.310796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311128   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.311159   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311367   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.311544   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311834   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.312007   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.312178   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.312195   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:07:07.430746   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.430783   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:07:07.430808   97943 buildroot.go:174] setting up certificates
	I1210 00:07:07.430826   97943 provision.go:84] configureAuth start
	I1210 00:07:07.430840   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.431122   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:07.433939   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434313   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.434337   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434511   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.436908   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437220   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.437245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437409   97943 provision.go:143] copyHostCerts
	I1210 00:07:07.437448   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437491   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:07:07.437503   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437576   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:07:07.437681   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437707   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:07:07.437715   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437755   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:07:07.437820   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437852   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:07:07.437861   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437895   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:07:07.437968   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m02 san=[127.0.0.1 192.168.39.198 ha-070032-m02 localhost minikube]
	I1210 00:07:08.044773   97943 provision.go:177] copyRemoteCerts
	I1210 00:07:08.044851   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:07:08.044891   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.047538   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.047846   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.047877   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.048076   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.048336   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.048503   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.048649   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.132237   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:07:08.132310   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:07:08.154520   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:07:08.154605   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:07:08.175951   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:07:08.176034   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:07:08.197284   97943 provision.go:87] duration metric: took 766.441651ms to configureAuth
	I1210 00:07:08.197318   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:07:08.197534   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:08.197630   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.200256   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200605   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.200631   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200777   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.200956   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201156   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201290   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.201439   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.201609   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.201622   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:07:08.422427   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:07:08.422470   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:07:08.422479   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetURL
	I1210 00:07:08.423873   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using libvirt version 6000000
	I1210 00:07:08.426057   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426388   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.426419   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426586   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:07:08.426605   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:07:08.426616   97943 client.go:171] duration metric: took 29.051087497s to LocalClient.Create
	I1210 00:07:08.426651   97943 start.go:167] duration metric: took 29.051156503s to libmachine.API.Create "ha-070032"
	I1210 00:07:08.426663   97943 start.go:293] postStartSetup for "ha-070032-m02" (driver="kvm2")
	I1210 00:07:08.426676   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:07:08.426697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.426973   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:07:08.427006   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.429163   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429425   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.429445   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429585   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.429771   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.429939   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.430073   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.511841   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:07:08.515628   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:07:08.515647   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:07:08.515716   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:07:08.515790   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:07:08.515798   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:07:08.515877   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:07:08.524177   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:08.545083   97943 start.go:296] duration metric: took 118.406585ms for postStartSetup
	I1210 00:07:08.545129   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:08.545727   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.548447   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.548762   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.548790   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.549019   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:08.549239   97943 start.go:128] duration metric: took 29.194124447s to createHost
	I1210 00:07:08.549263   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.551249   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551581   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.551601   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551788   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.551950   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552224   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.552368   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.552535   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.552544   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:07:08.658708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789228.640009863
	
	I1210 00:07:08.658732   97943 fix.go:216] guest clock: 1733789228.640009863
	I1210 00:07:08.658742   97943 fix.go:229] Guest: 2024-12-10 00:07:08.640009863 +0000 UTC Remote: 2024-12-10 00:07:08.549251378 +0000 UTC m=+75.795332018 (delta=90.758485ms)
	I1210 00:07:08.658764   97943 fix.go:200] guest clock delta is within tolerance: 90.758485ms
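	Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the node when the absolute delta stays within a tolerance (about 90ms here). A small sketch of that comparison follows; the 1-second tolerance and the float-based parsing are illustrative assumptions, not minikube's actual values.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output (seconds.nanoseconds)
	// and returns guest time minus host time. Float parsing loses a few hundred
	// nanoseconds, which is fine for a sketch.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Guest and host timestamps taken from the log lines above.
		delta, err := clockDelta("1733789228.640009863", time.Unix(0, 1733789228549251378))
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold, not minikube's real one
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
	}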
	I1210 00:07:08.658772   97943 start.go:83] releasing machines lock for "ha-070032-m02", held for 29.303735455s
	I1210 00:07:08.658798   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.659077   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.661426   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.661743   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.661779   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.663916   97943 out.go:177] * Found network options:
	I1210 00:07:08.665147   97943 out.go:177]   - NO_PROXY=192.168.39.187
	W1210 00:07:08.666190   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.666213   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666724   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666867   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666999   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:07:08.667045   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	W1210 00:07:08.667058   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.667145   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:07:08.667170   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.669614   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669829   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669978   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670007   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670217   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670241   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670437   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670446   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670629   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670648   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.670779   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670926   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.901492   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:07:08.907747   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:07:08.907817   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:07:08.923205   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:07:08.923229   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:07:08.923295   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:07:08.937553   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:07:08.950281   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:07:08.950346   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:07:08.962860   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:07:08.975314   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:07:09.086709   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:07:09.237022   97943 docker.go:233] disabling docker service ...
	I1210 00:07:09.237103   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:07:09.249910   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:07:09.261842   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:07:09.377487   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:07:09.489077   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:07:09.503310   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:07:09.520074   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:07:09.520146   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.529237   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:07:09.529299   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.538814   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.547790   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.557022   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:07:09.566274   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.575677   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.591166   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
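The sed edits above converge on a small CRI-O drop-in: pause image registry.k8s.io/pause:3.10, cgroupfs as the cgroup manager, conmon in the "pod" cgroup, and the unprivileged-port sysctl. A hedged Go sketch that writes an equivalent file; the exact section layout of /etc/crio/crio.conf.d/02-crio.conf is an assumption here, only the values come from the log.

package main

import "os"

// Drop-in with the settings the sed commands above apply; section placement
// is illustrative, the values match the log.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Written to /tmp here; on the node it lands in /etc/crio/crio.conf.d/.
	if err := os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}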
	I1210 00:07:09.600226   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:07:09.608899   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:07:09.608959   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:07:09.621054   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:07:09.630324   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:09.745895   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:07:09.836812   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:07:09.836886   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:07:09.841320   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:07:09.841380   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:07:09.845003   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:07:09.887045   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:07:09.887158   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.913628   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.940544   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:07:09.941808   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:07:09.942959   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:09.945644   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946026   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:09.946058   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946322   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:07:09.950215   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
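The grep/echo/cp pipeline above is an idempotent "ensure this hosts entry exists" rewrite: drop any stale line for the hostname, then append the desired mapping. A small Go equivalent of the same idea, operating on a local file; ensureHostsEntry is an illustrative name, not minikube's helper.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\thost" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}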
	I1210 00:07:09.961995   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:07:09.962176   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:09.962427   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.962471   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.977140   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I1210 00:07:09.977521   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.978002   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.978024   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.978339   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.978526   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:07:09.979937   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:09.980239   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.980281   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.994247   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 00:07:09.994760   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.995248   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.995276   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.995617   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.995804   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:09.995981   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.198
	I1210 00:07:09.995996   97943 certs.go:194] generating shared ca certs ...
	I1210 00:07:09.996013   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:09.996181   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:07:09.996237   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:07:09.996250   97943 certs.go:256] generating profile certs ...
	I1210 00:07:09.996340   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:07:09.996369   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880
	I1210 00:07:09.996386   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.254]
	I1210 00:07:10.076485   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 ...
	I1210 00:07:10.076513   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880: {Name:mk063fa61de97dbebc815f8cdc0b8ad5f6ad42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076683   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 ...
	I1210 00:07:10.076697   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880: {Name:mk6197070a633b3c7bff009f36273929319901d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076768   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:07:10.076894   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
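The apiserver certificate generated above carries IP SANs for the service IP, localhost, both control-plane node IPs, and the kube-vip VIP. A minimal Go sketch of issuing such a certificate with crypto/x509, using the SAN list from the log; the throwaway CA and the helper structure are illustrative (minikube would reuse the existing .minikube/ca.crt and ca.key), and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch; in the log the existing minikubeCA is reused.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP SANs shown in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "kube-apiserver"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.187"), net.ParseIP("192.168.39.198"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}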
	I1210 00:07:10.077019   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:07:10.077036   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:07:10.077051   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:07:10.077064   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:07:10.077079   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:07:10.077092   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:07:10.077105   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:07:10.077118   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:07:10.077130   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:07:10.077177   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:07:10.077207   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:07:10.077219   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:07:10.077240   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:07:10.077261   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:07:10.077283   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:07:10.077318   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:10.077343   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.077356   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.077368   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.077402   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:10.080314   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080656   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:10.080686   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080849   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:10.081053   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:10.081213   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:10.081346   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:10.150955   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:07:10.156109   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:07:10.172000   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:07:10.175843   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:07:10.191569   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:07:10.195845   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:07:10.205344   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:07:10.208990   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:07:10.218513   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:07:10.222172   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:07:10.231444   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:07:10.235751   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:07:10.245673   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:07:10.268586   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:07:10.289301   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:07:10.309755   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:07:10.330372   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 00:07:10.350734   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:07:10.370944   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:07:10.391160   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:07:10.411354   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:07:10.431480   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:07:10.453051   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:07:10.473317   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:07:10.487731   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:07:10.501999   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:07:10.516876   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:07:10.531860   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:07:10.546723   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:07:10.561653   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:07:10.575903   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:07:10.580966   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:07:10.590633   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594516   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594555   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.599765   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:07:10.609423   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:07:10.619123   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623118   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623159   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.628240   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:07:10.637834   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:07:10.647418   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651160   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651204   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.656233   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:07:10.666013   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:07:10.669458   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:07:10.669508   97943 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.31.2 crio true true} ...
	I1210 00:07:10.669598   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:07:10.669628   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:07:10.669651   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:07:10.689973   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:07:10.690046   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
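The kube-vip manifest above is produced by substituting the cluster's VIP, API port, and interface into a static pod template. A minimal Go sketch of that substitution step with text/template, using only a trimmed fragment of the manifest (the full template is the one printed above; the struct field names are illustrative).

package main

import (
	"os"
	"text/template"
)

// Trimmed fragment: only the per-cluster values are parameterized.
const vipTmpl = `    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
    - name: address
      value: {{.VIP}}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	err := t.Execute(os.Stdout, struct {
		Port      int
		Interface string
		VIP       string
	}{Port: 8443, Interface: "eth0", VIP: "192.168.39.254"})
	if err != nil {
		panic(err)
	}
}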
	I1210 00:07:10.690097   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.699806   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:07:10.699859   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.709208   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:07:10.709234   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.709289   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1210 00:07:10.709322   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1210 00:07:10.709296   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.713239   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:07:10.713260   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:07:11.639149   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.639234   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.643871   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:07:11.643902   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:07:11.758059   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:11.787926   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.788041   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.795093   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:07:11.795140   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
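The binary transfer above follows a "download, then verify against the published .sha256" pattern for kubectl, kubeadm, and kubelet. A self-contained Go sketch of that pattern; fetchAndVerify is an illustrative helper, not minikube's download.go API, and the destination path is arbitrary.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchAndVerify downloads binURL to dest while hashing it, then compares the
// SHA-256 against the first field of the published checksum file.
func fetchAndVerify(binURL, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	binResp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer binResp.Body.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), binResp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/"
	if err := fetchAndVerify(base+"kubectl", base+"kubectl.sha256", "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}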
	I1210 00:07:12.180780   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:07:12.189342   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:07:12.205977   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:07:12.220614   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:07:12.235844   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:07:12.239089   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:12.251338   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:12.381143   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:12.396098   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:12.396594   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:12.396651   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:12.412619   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1210 00:07:12.413166   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:12.413744   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:12.413766   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:12.414184   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:12.414391   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:12.414627   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:07:12.414728   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:07:12.414747   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:12.418002   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418418   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:12.418450   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418629   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:12.418810   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:12.418994   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:12.419164   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:12.570827   97943 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:12.570886   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I1210 00:07:32.921639   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (20.350728679s)
	I1210 00:07:32.921682   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:07:33.411739   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m02 minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:07:33.552589   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:07:33.681991   97943 start.go:319] duration metric: took 21.26735926s to joinCluster
	I1210 00:07:33.682079   97943 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:33.682486   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:33.683556   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:07:33.684723   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:33.911972   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:33.951142   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:07:33.951400   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:07:33.951471   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:07:33.951667   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m02" to be "Ready" ...
	I1210 00:07:33.951780   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:33.951788   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:33.951796   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:33.951800   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:33.961739   97943 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1210 00:07:34.452167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.452198   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.452211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.452219   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.456196   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:34.952070   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.952094   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.952105   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.952111   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.957522   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:07:35.452860   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.452883   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.452890   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.452894   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.456005   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.952021   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.952048   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.952058   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.952063   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.955318   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.955854   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:36.452184   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.452211   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.452222   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.452229   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.455126   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:36.951926   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.951955   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.951966   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.951973   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.956909   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:37.452305   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.452330   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.452341   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.452348   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.458679   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:37.952074   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.952096   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.952105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.952111   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.954863   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.452953   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.452983   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.452996   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.453003   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.455946   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.456796   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:38.952594   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.952617   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.952626   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.952630   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.955438   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:39.452632   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.452657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.452669   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.452675   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.455716   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:39.952848   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.952879   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.952893   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.952899   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.956221   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.452071   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.452095   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.452105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.452112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.455375   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.952464   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.952488   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.952507   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.952512   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.955445   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:40.956051   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:41.452509   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.452534   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.452542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.452547   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.455649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:41.952634   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.952657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.952666   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.952669   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.955344   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.452001   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.452023   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.452032   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.452036   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.454753   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.952401   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.952423   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.952436   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.952440   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.955178   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.451951   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.451974   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.451982   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.451986   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.454333   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.454867   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:43.951938   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.951963   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.951973   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.951978   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.954971   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.452196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.452218   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.452225   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.452230   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.455145   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.952295   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.952319   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.952327   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.952331   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.955347   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:45.452137   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.452165   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.452176   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.452181   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.477510   97943 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1210 00:07:45.477938   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:45.952299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.952324   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.952332   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.952335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.955321   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:46.452358   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.452384   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.452393   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.452397   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.455541   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:46.952608   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.952634   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.952643   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.952647   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.957412   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:47.452449   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.452471   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.452480   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.452484   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.455610   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.952117   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.952140   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.952153   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.952158   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.955292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.956098   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:48.452506   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.452532   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.452539   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.452543   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.455102   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:48.952221   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.952248   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.952258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.952265   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.955311   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.452304   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.452327   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.452335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.452340   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.455564   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.952482   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.952504   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.952512   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.952516   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.955476   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.452216   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.452240   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.452248   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.452252   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.455231   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.455908   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:50.952301   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.952323   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.952331   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.952335   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.955916   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.452010   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.452030   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.452039   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.452042   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.454528   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.455097   97943 node_ready.go:49] node "ha-070032-m02" has status "Ready":"True"
	I1210 00:07:51.455120   97943 node_ready.go:38] duration metric: took 17.50342824s for node "ha-070032-m02" to be "Ready" ...
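The polling just completed is a simple loop: GET the node every ~500ms and stop once its NodeReady condition reports True or the 6-minute budget runs out. A minimal Go sketch of that readiness check using client-go with the kubeconfig path shown in the log; this is an illustrative equivalent, not minikube's node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-070032-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}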
	I1210 00:07:51.455132   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:07:51.455240   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:51.455254   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.455263   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.455267   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.459208   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.466339   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.466409   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:07:51.466417   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.466423   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.466427   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.469050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.469653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.469667   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.469674   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.469678   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.472023   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.472637   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.472656   97943 pod_ready.go:82] duration metric: took 6.295928ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472667   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472740   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:07:51.472751   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.472759   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.472768   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.475075   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.475717   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.475733   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.475739   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.475743   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.477769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.478274   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.478291   97943 pod_ready.go:82] duration metric: took 5.614539ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478301   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478367   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:07:51.478379   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.478388   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.478394   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.480522   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.481177   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.481192   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.481202   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.481209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.483181   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:07:51.483658   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.483673   97943 pod_ready.go:82] duration metric: took 5.36618ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483680   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483721   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:07:51.483729   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.483736   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.483740   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.485816   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.486281   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.486294   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.486301   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.486305   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.488586   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.489007   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.489022   97943 pod_ready.go:82] duration metric: took 5.33676ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.489033   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.652421   97943 request.go:632] Waited for 163.314648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652507   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652514   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.652522   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.652529   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.655875   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.852945   97943 request.go:632] Waited for 196.352422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853007   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853013   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.853021   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.853024   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.855755   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.856291   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.856309   97943 pod_ready.go:82] duration metric: took 367.27061ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
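The "Waited for ... due to client-side throttling" lines are client-go's default rate limiter (QPS 5, burst 10) queuing requests: each pod check here issues two back-to-back GETs (the pod, then its node), so successive checks start spending ~200ms in the queue. In a tool of your own the budget is set on the rest.Config before building the clientset; the numbers below are examples, not what minikube configures:

package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset with an explicit QPS/Burst budget.
// client-go's defaults are QPS=5 and Burst=10, which produces the queuing
// delays reported by request.go in the log above.
func newThrottledClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 20
	cfg.Burst = 40
	return kubernetes.NewForConfig(cfg)
}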
	I1210 00:07:51.856319   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.052337   97943 request.go:632] Waited for 195.923221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052427   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052445   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.052456   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.052464   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.055099   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.252077   97943 request.go:632] Waited for 196.296135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252149   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252156   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.252167   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.252174   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.255050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.255574   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.255594   97943 pod_ready.go:82] duration metric: took 399.267887ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.255606   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.452073   97943 request.go:632] Waited for 196.39546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452157   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452173   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.452186   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.452244   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.458811   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:52.652632   97943 request.go:632] Waited for 193.214443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652697   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652702   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.652711   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.652716   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.655373   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.655983   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.656003   97943 pod_ready.go:82] duration metric: took 400.387415ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.656017   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.852497   97943 request.go:632] Waited for 196.400538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852597   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852602   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.852610   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.852615   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.855857   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.052833   97943 request.go:632] Waited for 196.298843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052897   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052903   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.052910   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.052914   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.055870   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.056472   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.056497   97943 pod_ready.go:82] duration metric: took 400.471759ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.056510   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.252421   97943 request.go:632] Waited for 195.828491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252528   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252541   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.252551   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.252557   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.255434   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.452445   97943 request.go:632] Waited for 196.391925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452546   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452560   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.452570   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.452575   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.456118   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.456572   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.456590   97943 pod_ready.go:82] duration metric: took 400.071362ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.456605   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.652799   97943 request.go:632] Waited for 196.033566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652870   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652877   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.652889   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.652897   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.656566   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.852630   97943 request.go:632] Waited for 195.347256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852735   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852743   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.852750   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.852754   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.856029   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.856560   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.856580   97943 pod_ready.go:82] duration metric: took 399.967291ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.856593   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.052778   97943 request.go:632] Waited for 196.074454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052856   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052864   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.052876   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.052886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.056269   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.252099   97943 request.go:632] Waited for 195.297548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252172   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.252179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.252194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.256109   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.256828   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.256845   97943 pod_ready.go:82] duration metric: took 400.243574ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.256855   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.452369   97943 request.go:632] Waited for 195.428155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452450   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452455   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.452462   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.452469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.455694   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.652684   97943 request.go:632] Waited for 196.354028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652789   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652798   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.652807   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.652815   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.655871   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.656329   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.656346   97943 pod_ready.go:82] duration metric: took 399.484539ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.656357   97943 pod_ready.go:39] duration metric: took 3.201198757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:07:54.656372   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:07:54.656424   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:07:54.671199   97943 api_server.go:72] duration metric: took 20.989077821s to wait for apiserver process to appear ...
	I1210 00:07:54.671227   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:07:54.671247   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:07:54.675276   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:07:54.675337   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:07:54.675341   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.675349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.675356   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.676142   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:07:54.676268   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:07:54.676284   97943 api_server.go:131] duration metric: took 5.052294ms to wait for apiserver health ...
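The healthz probe above is simply an HTTPS GET against the apiserver that succeeds once the body is "ok". A stripped-down sketch of the same check (it skips certificate verification purely for brevity; a real client trusts the cluster CA):

package example

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns nil once GET <base>/healthz answers 200 "ok".
// InsecureSkipVerify is only for this sketch; use the cluster CA in real code.
func apiserverHealthy(base string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}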
	I1210 00:07:54.676295   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:07:54.852698   97943 request.go:632] Waited for 176.309011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852754   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852758   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.852767   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.852774   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.857339   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:54.861880   97943 system_pods.go:59] 17 kube-system pods found
	I1210 00:07:54.861907   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:54.861912   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:54.861916   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:54.861920   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:54.861952   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:54.861962   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:54.861965   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:54.861969   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:54.861972   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:54.861979   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:54.861982   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:54.861985   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:54.861988   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:54.861992   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:54.861997   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:54.862000   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:54.862003   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:54.862009   97943 system_pods.go:74] duration metric: took 185.705934ms to wait for pod list to return data ...
	I1210 00:07:54.862019   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:07:55.052828   97943 request.go:632] Waited for 190.716484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052905   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052910   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.052920   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.052925   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.056476   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.056707   97943 default_sa.go:45] found service account: "default"
	I1210 00:07:55.056722   97943 default_sa.go:55] duration metric: took 194.697141ms for default service account to be created ...
	I1210 00:07:55.056734   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:07:55.252140   97943 request.go:632] Waited for 195.318975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252222   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252228   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.252235   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.252246   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.256177   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.260950   97943 system_pods.go:86] 17 kube-system pods found
	I1210 00:07:55.260986   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:55.260993   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:55.260998   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:55.261002   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:55.261005   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:55.261009   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:55.261013   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:55.261017   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:55.261021   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:55.261025   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:55.261028   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:55.261032   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:55.261035   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:55.261038   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:55.261041   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:55.261044   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:55.261047   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:55.261054   97943 system_pods.go:126] duration metric: took 204.311621ms to wait for k8s-apps to be running ...
	I1210 00:07:55.261063   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:07:55.261104   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:55.274767   97943 system_svc.go:56] duration metric: took 13.694234ms WaitForService to wait for kubelet
	I1210 00:07:55.274800   97943 kubeadm.go:582] duration metric: took 21.592682957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:07:55.274820   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:07:55.452205   97943 request.go:632] Waited for 177.292861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452266   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452271   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.452278   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.452283   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.455802   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.456649   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456674   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456687   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456691   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456696   97943 node_conditions.go:105] duration metric: took 181.87045ms to run NodePressure ...
	I1210 00:07:55.456708   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:07:55.456739   97943 start.go:255] writing updated cluster config ...
	I1210 00:07:55.458841   97943 out.go:201] 
	I1210 00:07:55.460254   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:55.460350   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.461990   97943 out.go:177] * Starting "ha-070032-m03" control-plane node in "ha-070032" cluster
	I1210 00:07:55.463162   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:07:55.463187   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:07:55.463285   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:07:55.463296   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:07:55.463384   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.463555   97943 start.go:360] acquireMachinesLock for ha-070032-m03: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:07:55.463598   97943 start.go:364] duration metric: took 23.179µs to acquireMachinesLock for "ha-070032-m03"
	I1210 00:07:55.463615   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:55.463708   97943 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1210 00:07:55.465955   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:07:55.466061   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:55.466099   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:55.482132   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1210 00:07:55.482649   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:55.483189   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:55.483214   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:55.483546   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:55.483725   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:07:55.483847   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:07:55.483970   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:07:55.484001   97943 client.go:168] LocalClient.Create starting
	I1210 00:07:55.484030   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:07:55.484063   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484076   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484129   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:07:55.484150   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484160   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484177   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:07:55.484187   97943 main.go:141] libmachine: (ha-070032-m03) Calling .PreCreateCheck
	I1210 00:07:55.484346   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:07:55.484732   97943 main.go:141] libmachine: Creating machine...
	I1210 00:07:55.484749   97943 main.go:141] libmachine: (ha-070032-m03) Calling .Create
	I1210 00:07:55.484892   97943 main.go:141] libmachine: (ha-070032-m03) Creating KVM machine...
	I1210 00:07:55.486009   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing default KVM network
	I1210 00:07:55.486135   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing private KVM network mk-ha-070032
	I1210 00:07:55.486275   97943 main.go:141] libmachine: (ha-070032-m03) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.486315   97943 main.go:141] libmachine: (ha-070032-m03) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:07:55.486369   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.486273   98753 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.486441   97943 main.go:141] libmachine: (ha-070032-m03) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:07:55.750942   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.750806   98753 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa...
	I1210 00:07:55.823142   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.822993   98753 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk...
	I1210 00:07:55.823184   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing magic tar header
	I1210 00:07:55.823200   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing SSH key tar header
	I1210 00:07:55.823214   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.823115   98753 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.823231   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03
	I1210 00:07:55.823252   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 (perms=drwx------)
	I1210 00:07:55.823278   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:07:55.823337   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:07:55.823363   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.823375   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:07:55.823392   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:07:55.823405   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:07:55.823415   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:07:55.823431   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:07:55.823442   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home
	I1210 00:07:55.823456   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Skipping /home - not owner
	I1210 00:07:55.823471   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:07:55.823488   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:07:55.823501   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:55.824547   97943 main.go:141] libmachine: (ha-070032-m03) define libvirt domain using xml: 
	I1210 00:07:55.824562   97943 main.go:141] libmachine: (ha-070032-m03) <domain type='kvm'>
	I1210 00:07:55.824568   97943 main.go:141] libmachine: (ha-070032-m03)   <name>ha-070032-m03</name>
	I1210 00:07:55.824572   97943 main.go:141] libmachine: (ha-070032-m03)   <memory unit='MiB'>2200</memory>
	I1210 00:07:55.824578   97943 main.go:141] libmachine: (ha-070032-m03)   <vcpu>2</vcpu>
	I1210 00:07:55.824582   97943 main.go:141] libmachine: (ha-070032-m03)   <features>
	I1210 00:07:55.824588   97943 main.go:141] libmachine: (ha-070032-m03)     <acpi/>
	I1210 00:07:55.824594   97943 main.go:141] libmachine: (ha-070032-m03)     <apic/>
	I1210 00:07:55.824599   97943 main.go:141] libmachine: (ha-070032-m03)     <pae/>
	I1210 00:07:55.824605   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824615   97943 main.go:141] libmachine: (ha-070032-m03)   </features>
	I1210 00:07:55.824649   97943 main.go:141] libmachine: (ha-070032-m03)   <cpu mode='host-passthrough'>
	I1210 00:07:55.824662   97943 main.go:141] libmachine: (ha-070032-m03)   
	I1210 00:07:55.824670   97943 main.go:141] libmachine: (ha-070032-m03)   </cpu>
	I1210 00:07:55.824678   97943 main.go:141] libmachine: (ha-070032-m03)   <os>
	I1210 00:07:55.824685   97943 main.go:141] libmachine: (ha-070032-m03)     <type>hvm</type>
	I1210 00:07:55.824690   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='cdrom'/>
	I1210 00:07:55.824697   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='hd'/>
	I1210 00:07:55.824703   97943 main.go:141] libmachine: (ha-070032-m03)     <bootmenu enable='no'/>
	I1210 00:07:55.824709   97943 main.go:141] libmachine: (ha-070032-m03)   </os>
	I1210 00:07:55.824714   97943 main.go:141] libmachine: (ha-070032-m03)   <devices>
	I1210 00:07:55.824720   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='cdrom'>
	I1210 00:07:55.824728   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/boot2docker.iso'/>
	I1210 00:07:55.824735   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hdc' bus='scsi'/>
	I1210 00:07:55.824740   97943 main.go:141] libmachine: (ha-070032-m03)       <readonly/>
	I1210 00:07:55.824746   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824753   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='disk'>
	I1210 00:07:55.824761   97943 main.go:141] libmachine: (ha-070032-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:07:55.824769   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk'/>
	I1210 00:07:55.824776   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hda' bus='virtio'/>
	I1210 00:07:55.824780   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824787   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824793   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='mk-ha-070032'/>
	I1210 00:07:55.824799   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824804   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824809   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824814   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='default'/>
	I1210 00:07:55.824819   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824824   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824830   97943 main.go:141] libmachine: (ha-070032-m03)     <serial type='pty'>
	I1210 00:07:55.824835   97943 main.go:141] libmachine: (ha-070032-m03)       <target port='0'/>
	I1210 00:07:55.824842   97943 main.go:141] libmachine: (ha-070032-m03)     </serial>
	I1210 00:07:55.824846   97943 main.go:141] libmachine: (ha-070032-m03)     <console type='pty'>
	I1210 00:07:55.824852   97943 main.go:141] libmachine: (ha-070032-m03)       <target type='serial' port='0'/>
	I1210 00:07:55.824859   97943 main.go:141] libmachine: (ha-070032-m03)     </console>
	I1210 00:07:55.824863   97943 main.go:141] libmachine: (ha-070032-m03)     <rng model='virtio'>
	I1210 00:07:55.824871   97943 main.go:141] libmachine: (ha-070032-m03)       <backend model='random'>/dev/random</backend>
	I1210 00:07:55.824874   97943 main.go:141] libmachine: (ha-070032-m03)     </rng>
	I1210 00:07:55.824881   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824884   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824891   97943 main.go:141] libmachine: (ha-070032-m03)   </devices>
	I1210 00:07:55.824895   97943 main.go:141] libmachine: (ha-070032-m03) </domain>
	I1210 00:07:55.824901   97943 main.go:141] libmachine: (ha-070032-m03) 
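The XML logged line-by-line above is handed to libvirt as a single document; defining and booting the domain then takes two calls through the libvirt Go bindings. A minimal sketch, assuming the libvirt.org/go/libvirt module path (older code imports github.com/libvirt/libvirt-go instead):

package example

import (
	libvirt "libvirt.org/go/libvirt" // import path is an assumption for this sketch
)

// defineAndStart registers a persistent domain from XML and boots it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // equivalent to `virsh start`
}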
	I1210 00:07:55.831443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:5a:d9:d9 in network default
	I1210 00:07:55.832042   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:55.832057   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring networks are active...
	I1210 00:07:55.832934   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network default is active
	I1210 00:07:55.833292   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network mk-ha-070032 is active
	I1210 00:07:55.833793   97943 main.go:141] libmachine: (ha-070032-m03) Getting domain xml...
	I1210 00:07:55.834538   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:57.048312   97943 main.go:141] libmachine: (ha-070032-m03) Waiting to get IP...
	I1210 00:07:57.049343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.049867   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.049936   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.049857   98753 retry.go:31] will retry after 285.89703ms: waiting for machine to come up
	I1210 00:07:57.337509   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.337895   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.337921   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.337875   98753 retry.go:31] will retry after 339.218188ms: waiting for machine to come up
	I1210 00:07:57.678323   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.678856   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.678881   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.678806   98753 retry.go:31] will retry after 294.170833ms: waiting for machine to come up
	I1210 00:07:57.974134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.974660   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.974681   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.974611   98753 retry.go:31] will retry after 408.745882ms: waiting for machine to come up
	I1210 00:07:58.385123   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.385636   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.385664   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.385591   98753 retry.go:31] will retry after 527.821664ms: waiting for machine to come up
	I1210 00:07:58.915568   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.916006   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.916035   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.915961   98753 retry.go:31] will retry after 925.585874ms: waiting for machine to come up
	I1210 00:07:59.843180   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:59.843652   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:59.843679   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:59.843610   98753 retry.go:31] will retry after 870.720245ms: waiting for machine to come up
	I1210 00:08:00.715984   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:00.716446   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:00.716472   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:00.716425   98753 retry.go:31] will retry after 1.331743311s: waiting for machine to come up
	I1210 00:08:02.049640   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:02.050041   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:02.050067   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:02.049985   98753 retry.go:31] will retry after 1.76199987s: waiting for machine to come up
	I1210 00:08:03.813933   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:03.814414   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:03.814439   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:03.814370   98753 retry.go:31] will retry after 1.980303699s: waiting for machine to come up
	I1210 00:08:05.796494   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:05.797056   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:05.797086   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:05.797021   98753 retry.go:31] will retry after 2.086128516s: waiting for machine to come up
	I1210 00:08:07.884316   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:07.884692   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:07.884721   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:07.884642   98753 retry.go:31] will retry after 2.780301455s: waiting for machine to come up
	I1210 00:08:10.666546   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:10.666965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:10.666996   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:10.666924   98753 retry.go:31] will retry after 4.142573793s: waiting for machine to come up
	I1210 00:08:14.811574   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:14.811965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:14.811989   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:14.811918   98753 retry.go:31] will retry after 5.321214881s: waiting for machine to come up
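The growing "will retry after ..." intervals above are a jittered backoff: each failed IP lookup waits a bit longer (up to a cap), with randomness added so parallel machine creations don't poll libvirt in lockstep. The shape of that loop as a sketch (function names and constants are made up for illustration):

package example

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns a non-empty address,
// sleeping with doubling, jittered delays similar to the retry.go trace above.
func waitForIP(ctx context.Context, lookup func() (string, error)) (string, error) {
	delay := 300 * time.Millisecond
	const maxDelay = 5 * time.Second
	for {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		// add up to 50% jitter on top of the current base delay
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("timed out waiting for IP: %w", ctx.Err())
		case <-time.After(sleep):
		}
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}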
	I1210 00:08:20.135607   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136014   97943 main.go:141] libmachine: (ha-070032-m03) Found IP for machine: 192.168.39.244
	I1210 00:08:20.136038   97943 main.go:141] libmachine: (ha-070032-m03) Reserving static IP address...
	I1210 00:08:20.136048   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136451   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find host DHCP lease matching {name: "ha-070032-m03", mac: "52:54:00:36:e7:81", ip: "192.168.39.244"} in network mk-ha-070032
	I1210 00:08:20.209941   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Getting to WaitForSSH function...
	I1210 00:08:20.209976   97943 main.go:141] libmachine: (ha-070032-m03) Reserved static IP address: 192.168.39.244
	I1210 00:08:20.209989   97943 main.go:141] libmachine: (ha-070032-m03) Waiting for SSH to be available...
	I1210 00:08:20.212879   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213267   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.213298   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213460   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH client type: external
	I1210 00:08:20.213487   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa (-rw-------)
	I1210 00:08:20.213527   97943 main.go:141] libmachine: (ha-070032-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:08:20.213547   97943 main.go:141] libmachine: (ha-070032-m03) DBG | About to run SSH command:
	I1210 00:08:20.213584   97943 main.go:141] libmachine: (ha-070032-m03) DBG | exit 0
	I1210 00:08:20.342480   97943 main.go:141] libmachine: (ha-070032-m03) DBG | SSH cmd err, output: <nil>: 
	I1210 00:08:20.342791   97943 main.go:141] libmachine: (ha-070032-m03) KVM machine creation complete!
	I1210 00:08:20.343090   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:20.343678   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.343881   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.344092   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:08:20.344125   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetState
	I1210 00:08:20.345413   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:08:20.345430   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:08:20.345437   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:08:20.345450   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.347967   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348355   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.348389   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348481   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.348653   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348776   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348911   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.349041   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.349329   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.349348   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:08:20.449562   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.449588   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:08:20.449598   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.452398   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452785   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.452812   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452941   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.453110   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453240   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453428   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.453598   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.453780   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.453798   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:08:20.555272   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:08:20.555337   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:08:20.555348   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:08:20.555362   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555624   97943 buildroot.go:166] provisioning hostname "ha-070032-m03"
	I1210 00:08:20.555652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555844   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.558784   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559157   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.559192   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559357   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.559555   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559716   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559850   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.560050   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.560266   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.560285   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m03 && echo "ha-070032-m03" | sudo tee /etc/hostname
	I1210 00:08:20.676771   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m03
	
	I1210 00:08:20.676807   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.679443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.679776   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.679807   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.680006   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.680185   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680359   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680491   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.680620   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.680832   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.680847   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:08:20.791291   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.791325   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:08:20.791341   97943 buildroot.go:174] setting up certificates
	I1210 00:08:20.791358   97943 provision.go:84] configureAuth start
	I1210 00:08:20.791370   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.791652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:20.794419   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.794874   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.794902   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.795002   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.798177   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798590   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.798619   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798789   97943 provision.go:143] copyHostCerts
	I1210 00:08:20.798825   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798862   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:08:20.798871   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798934   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:08:20.799007   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799025   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:08:20.799030   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799053   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:08:20.799097   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799112   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:08:20.799119   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799140   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:08:20.799198   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m03 san=[127.0.0.1 192.168.39.244 ha-070032-m03 localhost minikube]
	I1210 00:08:20.901770   97943 provision.go:177] copyRemoteCerts
	I1210 00:08:20.901829   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:08:20.901857   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.904479   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904810   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.904842   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904999   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.905202   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.905341   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.905465   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:20.987981   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:08:20.988061   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:08:21.011122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:08:21.011186   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:08:21.033692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:08:21.033754   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:08:21.056597   97943 provision.go:87] duration metric: took 265.223032ms to configureAuth
	I1210 00:08:21.056629   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:08:21.057591   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:21.057673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.060831   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.061378   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.061904   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062107   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062269   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.062474   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.062700   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.062721   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:08:21.281273   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:08:21.281301   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:08:21.281310   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetURL
	I1210 00:08:21.282833   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using libvirt version 6000000
	I1210 00:08:21.285219   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285581   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.285613   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285747   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:08:21.285761   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:08:21.285769   97943 client.go:171] duration metric: took 25.801757929s to LocalClient.Create
	I1210 00:08:21.285791   97943 start.go:167] duration metric: took 25.801831678s to libmachine.API.Create "ha-070032"
	I1210 00:08:21.285798   97943 start.go:293] postStartSetup for "ha-070032-m03" (driver="kvm2")
	I1210 00:08:21.285807   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:08:21.285828   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.286085   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:08:21.286117   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.288055   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288329   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.288370   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288480   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.288647   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.288777   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.288901   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.369391   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:08:21.373285   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:08:21.373310   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:08:21.373392   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:08:21.373503   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:08:21.373518   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:08:21.373639   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:08:21.382298   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:21.403806   97943 start.go:296] duration metric: took 117.996202ms for postStartSetup
	I1210 00:08:21.403863   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:21.404476   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.407162   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407495   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.407517   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407796   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:08:21.408029   97943 start.go:128] duration metric: took 25.944309943s to createHost
	I1210 00:08:21.408053   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.410158   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410458   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.410486   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410661   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.410839   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411023   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411142   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.411301   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.411462   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.411473   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:08:21.514926   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789301.493981402
	
	I1210 00:08:21.514949   97943 fix.go:216] guest clock: 1733789301.493981402
	I1210 00:08:21.514956   97943 fix.go:229] Guest: 2024-12-10 00:08:21.493981402 +0000 UTC Remote: 2024-12-10 00:08:21.408042688 +0000 UTC m=+148.654123328 (delta=85.938714ms)
	I1210 00:08:21.514972   97943 fix.go:200] guest clock delta is within tolerance: 85.938714ms
	I1210 00:08:21.514978   97943 start.go:83] releasing machines lock for "ha-070032-m03", held for 26.05137115s
	I1210 00:08:21.514997   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.515241   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.517912   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.518241   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.518261   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.520470   97943 out.go:177] * Found network options:
	I1210 00:08:21.521800   97943 out.go:177]   - NO_PROXY=192.168.39.187,192.168.39.198
	W1210 00:08:21.523143   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.523168   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.523188   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523682   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523924   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.524029   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:08:21.524084   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	W1210 00:08:21.524110   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.524137   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.524228   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:08:21.524251   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.527134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527403   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527435   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527461   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527644   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.527864   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527884   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.527885   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.528014   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.528094   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528182   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.528256   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.528295   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528396   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.759543   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:08:21.765842   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:08:21.765945   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:08:21.781497   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:08:21.781528   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:08:21.781601   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:08:21.798260   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:08:21.812631   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:08:21.812703   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:08:21.826291   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:08:21.839819   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:08:21.970011   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:08:22.106825   97943 docker.go:233] disabling docker service ...
	I1210 00:08:22.106898   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:08:22.120845   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:08:22.133078   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:08:22.277754   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:08:22.396135   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:08:22.410691   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:08:22.428016   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:08:22.428081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.437432   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:08:22.437492   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.446807   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.457081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.466785   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:08:22.476232   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.485876   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.501168   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.511414   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:08:22.520354   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:08:22.520415   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:08:22.532412   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:08:22.541467   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:22.650142   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:08:22.739814   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:08:22.739908   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:08:22.744756   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:08:22.744820   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:08:22.748420   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:08:22.786505   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:08:22.786627   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.812591   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.840186   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:08:22.841668   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:08:22.842917   97943 out.go:177]   - env NO_PROXY=192.168.39.187,192.168.39.198
	I1210 00:08:22.843965   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:22.846623   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847074   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:22.847104   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847299   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:08:22.851246   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:22.863976   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:08:22.864213   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:22.864497   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.864537   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.879688   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1210 00:08:22.880163   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.880674   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.880695   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.880999   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.881201   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:08:22.882501   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:22.882829   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.882872   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.897175   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1210 00:08:22.897634   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.898146   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.898164   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.898482   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.898668   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:22.898817   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.244
	I1210 00:08:22.898832   97943 certs.go:194] generating shared ca certs ...
	I1210 00:08:22.898852   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:22.899000   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:08:22.899051   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:08:22.899064   97943 certs.go:256] generating profile certs ...
	I1210 00:08:22.899170   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:08:22.899201   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8
	I1210 00:08:22.899223   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:08:23.092450   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 ...
	I1210 00:08:23.092478   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8: {Name:mk366065b18659314ca3f0bba1448963daaf0a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092639   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 ...
	I1210 00:08:23.092651   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8: {Name:mk5fa66078dcf45a83918146be6cef89d508f259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092720   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:08:23.092839   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:08:23.092959   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:08:23.092977   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:08:23.092992   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:08:23.093006   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:08:23.093017   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:08:23.093029   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:08:23.093041   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:08:23.093053   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:08:23.106669   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:08:23.106767   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:08:23.106812   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:08:23.106826   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:08:23.106858   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:08:23.106887   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:08:23.106916   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:08:23.107014   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:23.107059   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.107078   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.107095   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.107140   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:23.110428   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.110865   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:23.110897   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.111098   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:23.111299   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:23.111497   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:23.111654   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:23.182834   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:08:23.187460   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:08:23.201682   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:08:23.206212   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:08:23.216977   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:08:23.221040   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:08:23.231771   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:08:23.235936   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:08:23.245237   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:08:23.249225   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:08:23.259163   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:08:23.262970   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:08:23.272905   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:08:23.296036   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:08:23.319479   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:08:23.343697   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:08:23.365055   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1210 00:08:23.386745   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:08:23.408376   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:08:23.431761   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:08:23.453442   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:08:23.474461   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:08:23.496103   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:08:23.518047   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:08:23.533023   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:08:23.547698   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:08:23.563066   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:08:23.577579   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:08:23.592182   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:08:23.608125   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:08:23.627416   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:08:23.632821   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:08:23.642458   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646845   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646909   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.652298   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:08:23.662442   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:08:23.672292   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676158   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676205   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.681586   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:08:23.691472   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:08:23.701487   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705375   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705413   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.710443   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:08:23.720294   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:08:23.723799   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:08:23.723848   97943 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.2 crio true true} ...
	I1210 00:08:23.723926   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:08:23.723949   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:08:23.723977   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:08:23.738685   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:08:23.738750   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
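The generated static pod runs kube-vip with leader election on the `plndr-cp-lock` lease in `kube-system` and advertises the control-plane VIP `192.168.39.254` on port 8443, so any node can reach the API server through the floating address. A hedged sketch of the kind of check a client could run against that VIP, assuming the apiserver's `/healthz` endpoint is reachable and skipping certificate verification purely to keep the example self-contained (illustrative only, not part of the test):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// VIP and port taken from the kube-vip config in the log above.
	const vipHealthz = "https://192.168.39.254:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by minikubeCA; verification is
			// skipped here only so the sketch needs no CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get(vipHealthz)
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP answered with status:", resp.Status)
}
```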
	I1210 00:08:23.738796   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.747698   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:08:23.747755   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1210 00:08:23.756827   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:08:23.756846   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:23.756856   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1210 00:08:23.756914   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.756945   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756968   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.773755   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773816   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:08:23.773823   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:08:23.773877   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:08:23.793177   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:08:23.793213   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
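Because the new node has no cached binaries, kubeadm/kubelet/kubectl are fetched from `dl.k8s.io` with a `?checksum=file:<url>.sha256` hint and then copied over SSH. A rough sketch of that verify-while-downloading idea, under the assumption that the published `.sha256` file carries the hex digest of the binary (URLs are the ones from the log; this is illustrative, not minikube's downloader):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// URLs taken from the log above.
	const binURL = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
	const sumURL = binURL + ".sha256"

	// Fetch the published digest; assumed to be a bare hex SHA-256
	// (possibly followed by a filename), so keep only the first field.
	resp, err := http.Get(sumURL)
	if err != nil {
		panic(err)
	}
	sumBytes, err := io.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumBytes))[0]

	// Stream the binary to disk while hashing it.
	resp, err = http.Get(binURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("kubelet")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		panic(err)
	}

	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
		os.Exit(1)
	}
	fmt.Println("kubelet verified:", got)
}
```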
	I1210 00:08:24.557518   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:08:24.566776   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:08:24.582142   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:08:24.597144   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:08:24.611549   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:08:24.615055   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:24.625780   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:24.763770   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:24.783613   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:24.784058   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:24.784117   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:24.799970   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I1210 00:08:24.800574   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:24.801077   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:24.801104   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:24.801443   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:24.801614   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:24.801763   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:08:24.801913   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:08:24.801952   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:24.804893   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805288   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:24.805318   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805470   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:24.805660   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:24.805792   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:24.805938   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:24.954369   97943 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:24.954415   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I1210 00:08:45.926879   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (20.972431626s)
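The join command above pins the cluster CA with `--discovery-token-ca-cert-hash sha256:<hex>`; kubeadm computes that value as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A small sketch that reproduces the hash from a CA PEM so it can be compared against the value printed in the join command (the input path is an assumption for illustration; on a minikube node the CA typically lives under /var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed local copy of the cluster CA certificate.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm's pin format: sha256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```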
	I1210 00:08:45.926930   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:08:46.537890   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m03 minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:08:46.678755   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:08:46.787657   97943 start.go:319] duration metric: took 21.985888121s to joinCluster
	I1210 00:08:46.787759   97943 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:46.788166   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:46.789343   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:08:46.790511   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:47.024805   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:47.076330   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:08:47.076598   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:08:47.076672   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:08:47.076938   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m03" to be "Ready" ...
	I1210 00:08:47.077046   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.077058   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.077068   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.077072   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.081152   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:47.577919   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.577942   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.577950   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.577954   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.581367   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.077920   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.077946   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.077954   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.077957   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.081478   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.578106   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.578131   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.578140   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.578145   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.581394   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.077995   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.078020   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.078028   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.078032   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.081191   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.081654   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:49.577520   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.577543   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.577568   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.577572   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.580973   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:50.077456   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.077483   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.077492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.077497   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.083402   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:08:50.577976   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.577999   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.578007   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.578010   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.580506   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:08:51.077330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.077376   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.077386   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.077395   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.080649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.577290   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.577326   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.577339   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.577349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.580882   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.581750   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:52.077653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.077675   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.077683   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.077687   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.080889   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:52.578159   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.578187   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.578198   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.578206   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.582757   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:53.078153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.078177   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.078185   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.078189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.081439   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:53.577299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.577324   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.577333   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.577338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.580510   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:54.077196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.077220   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.077230   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.077236   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.083654   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:08:54.084273   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:54.578076   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.578111   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.578119   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.578123   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.581723   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.077626   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.077648   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.077657   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.077660   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.081300   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.577841   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.577867   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.577886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.581081   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.078005   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.078027   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.078036   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.078039   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.081200   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.577743   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.577839   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.577862   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.582190   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:56.583066   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:57.077440   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.077464   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.077472   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.077477   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.080605   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:57.577457   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.577484   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.577493   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.577503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.580830   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.077293   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.077331   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.077344   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.077352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.080511   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.577256   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.577282   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.577294   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.577299   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.580528   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.077895   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.077918   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.077926   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.077932   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.080996   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.081515   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:59.577418   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.577442   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.577450   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.577454   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.580861   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.077126   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.077149   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.077160   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.077166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.080369   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.577334   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.577369   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.577376   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.580424   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.077338   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.077364   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.077371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.077375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.080475   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.577333   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.577371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.577378   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.581002   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.581675   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:02.078158   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.078188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.078197   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.078202   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.081520   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:02.577513   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.577534   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.577542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.577548   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.580750   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:03.077225   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.077249   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.077258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.077262   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.080188   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:03.577192   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.577225   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.577233   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.577238   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.579962   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:04.078167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.078198   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.078207   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.078211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.081272   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:04.081781   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:04.577794   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.577818   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.577826   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.577833   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.580810   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.077153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.077175   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.077183   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.077189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.080235   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.577566   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.577589   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.577597   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.577601   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.580616   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.581339   97943 node_ready.go:49] node "ha-070032-m03" has status "Ready":"True"
	I1210 00:09:05.581357   97943 node_ready.go:38] duration metric: took 18.504395192s for node "ha-070032-m03" to be "Ready" ...
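The 18.5s wait above is implemented by repeatedly issuing GET /api/v1/nodes/ha-070032-m03 until the node's Ready condition turns True. An equivalent wait written against client-go, as a sketch under the assumption that the kubeconfig shown earlier in the log is available locally:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as node_ready.go
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-070032-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```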
	I1210 00:09:05.581372   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:09:05.581447   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:05.581458   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.581465   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.581469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.589597   97943 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1210 00:09:05.596462   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.596536   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:09:05.596544   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.596551   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.596556   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599226   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.599844   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.599860   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.599867   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599871   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.602025   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.602633   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.602657   97943 pod_ready.go:82] duration metric: took 6.171823ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602669   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602734   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:09:05.602745   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.602755   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.602759   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.605440   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.606129   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.606147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.606157   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.606166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.608461   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.608910   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.608928   97943 pod_ready.go:82] duration metric: took 6.250217ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608941   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608999   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:09:05.609009   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.609019   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.609029   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.611004   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.611561   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.611577   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.611587   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.611591   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.613769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.614248   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.614265   97943 pod_ready.go:82] duration metric: took 5.312355ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614275   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:09:05.614341   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.614352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.614362   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.616534   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.617151   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:05.617169   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.617188   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.617196   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.619058   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.619439   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.619455   97943 pod_ready.go:82] duration metric: took 5.173011ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.619463   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.777761   97943 request.go:632] Waited for 158.225465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777859   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777871   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.777881   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.777892   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.780968   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.978102   97943 request.go:632] Waited for 196.392006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978169   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978176   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.978187   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.978209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.981545   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.981978   97943 pod_ready.go:93] pod "etcd-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.981997   97943 pod_ready.go:82] duration metric: took 362.528097ms for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
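The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter: with QPS:0 and Burst:0 in the rest.Config shown earlier, the client falls back to its defaults (5 requests/s with a burst of 10), so the burst of per-pod and per-node GETs gets queued briefly. If that throttling were unwanted in a local experiment, the limits could be raised on the config before building the clientset; a sketch with arbitrary values:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Raise the client-side rate limits so bursts of readiness GETs are not
	// delayed by the throttler. Values are arbitrary, for illustration only.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // the clientset would be used for the subsequent API calls
	fmt.Printf("clientset built with QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
```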
	I1210 00:09:05.982014   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.178303   97943 request.go:632] Waited for 196.186487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178366   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178371   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.178384   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.178391   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.181153   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:06.378297   97943 request.go:632] Waited for 196.356871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378357   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378363   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.378371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.378375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.381593   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.382165   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.382184   97943 pod_ready.go:82] duration metric: took 400.160632ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.382194   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.578291   97943 request.go:632] Waited for 195.993966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578353   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.578366   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.578370   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.582418   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:06.777593   97943 request.go:632] Waited for 194.199077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777669   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777674   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.777681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.777686   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.780997   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.781681   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.781703   97943 pod_ready.go:82] duration metric: took 399.498231ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.781713   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.977670   97943 request.go:632] Waited for 195.882184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977738   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977758   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.977770   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.977778   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.981052   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.178250   97943 request.go:632] Waited for 196.370885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178313   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178319   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.178329   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.178338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.182730   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:07.183284   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.183306   97943 pod_ready.go:82] duration metric: took 401.586259ms for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.183318   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.378237   97943 request.go:632] Waited for 194.824127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378316   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378322   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.378330   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.378333   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.382039   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.578085   97943 request.go:632] Waited for 195.402263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578148   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578154   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.578162   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.578166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.581490   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.582147   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.582169   97943 pod_ready.go:82] duration metric: took 398.840074ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.582184   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.778287   97943 request.go:632] Waited for 195.989005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778362   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778374   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.778386   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.778396   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.781669   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.978394   97943 request.go:632] Waited for 195.912192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978479   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978484   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.978492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.978496   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.981759   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.982200   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.982218   97943 pod_ready.go:82] duration metric: took 400.02698ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.982230   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.178354   97943 request.go:632] Waited for 196.04264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178439   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178449   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.178466   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.181631   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.378597   97943 request.go:632] Waited for 196.366344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378673   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378683   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.378697   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.378707   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.384450   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:09:08.385049   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.385078   97943 pod_ready.go:82] duration metric: took 402.840862ms for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.385096   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.577999   97943 request.go:632] Waited for 192.799851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578083   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578091   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.578100   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.578112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.581292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.777999   97943 request.go:632] Waited for 196.009017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778080   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778085   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.778093   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.778098   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.781007   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:08.781565   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.781586   97943 pod_ready.go:82] duration metric: took 396.482834ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.781597   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.978485   97943 request.go:632] Waited for 196.79193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978550   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978555   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.978577   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.978584   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.981555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.178372   97943 request.go:632] Waited for 196.176512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178445   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178450   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.178462   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.180718   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.181230   97943 pod_ready.go:93] pod "kube-proxy-bhnsm" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.181253   97943 pod_ready.go:82] duration metric: took 399.648229ms for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.181267   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.378388   97943 request.go:632] Waited for 197.025674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378477   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378488   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.378497   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.378503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.381425   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.578360   97943 request.go:632] Waited for 196.219183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578421   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578427   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.578435   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.578443   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.581280   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.581905   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.581924   97943 pod_ready.go:82] duration metric: took 400.650321ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.581937   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.778061   97943 request.go:632] Waited for 196.052401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778128   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.778155   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.778159   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.781448   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.978364   97943 request.go:632] Waited for 196.322768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978428   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978432   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.978441   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.978451   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.981730   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.982286   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.982308   97943 pod_ready.go:82] duration metric: took 400.362948ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.982322   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.178076   97943 request.go:632] Waited for 195.65251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178177   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.178190   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.178199   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.180876   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.377670   97943 request.go:632] Waited for 196.175118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377736   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377741   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.377751   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.377756   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.380801   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.381686   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.381707   97943 pod_ready.go:82] duration metric: took 399.375185ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.381723   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.578151   97943 request.go:632] Waited for 196.332176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578230   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578239   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.578251   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.578259   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.581336   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.778384   97943 request.go:632] Waited for 196.388806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778498   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778512   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.778524   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.778534   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.781555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.782190   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.782213   97943 pod_ready.go:82] duration metric: took 400.482867ms for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.782226   97943 pod_ready.go:39] duration metric: took 5.200841149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
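
The pod_ready.go lines above poll each system pod's Ready condition against the apiserver until it reports True or the 6m0s timeout expires. The following is a minimal sketch of that polling pattern using client-go; it is not minikube's actual pod_ready.go, and the kubeconfig path, namespace, and pod name (taken from the trace) are assumptions for illustration only.

// readiness_poll_sketch.go - hedged illustration of waiting for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: credentials come from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s for up to 6 minutes, mirroring the "waiting up to 6m0s" lines in the trace.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-ha-070032", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling until the timeout
			}
			return podIsReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
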
	I1210 00:09:10.782243   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:09:10.782306   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:09:10.798221   97943 api_server.go:72] duration metric: took 24.010410964s to wait for apiserver process to appear ...
	I1210 00:09:10.798252   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:09:10.798277   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:09:10.802683   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:09:10.802763   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:09:10.802775   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.802786   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.802791   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.803637   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:09:10.803715   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:09:10.803733   97943 api_server.go:131] duration metric: took 5.473282ms to wait for apiserver health ...
	I1210 00:09:10.803747   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:09:10.978074   97943 request.go:632] Waited for 174.240033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978174   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.978200   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.978210   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.984458   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:09:10.990989   97943 system_pods.go:59] 24 kube-system pods found
	I1210 00:09:10.991013   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:10.991018   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:10.991022   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:10.991026   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:10.991029   97943 system_pods.go:61] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:10.991032   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:10.991034   97943 system_pods.go:61] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:10.991037   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:10.991041   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:10.991044   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:10.991047   97943 system_pods.go:61] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:10.991050   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:10.991054   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:10.991057   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:10.991060   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:10.991064   97943 system_pods.go:61] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:10.991068   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:10.991074   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:10.991078   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:10.991081   97943 system_pods.go:61] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:10.991084   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:10.991087   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:10.991090   97943 system_pods.go:61] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:10.991095   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:10.991101   97943 system_pods.go:74] duration metric: took 187.346055ms to wait for pod list to return data ...
	I1210 00:09:10.991110   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:09:11.178582   97943 request.go:632] Waited for 187.368121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178661   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178670   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.178681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.178692   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.181792   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.181919   97943 default_sa.go:45] found service account: "default"
	I1210 00:09:11.181932   97943 default_sa.go:55] duration metric: took 190.816109ms for default service account to be created ...
	I1210 00:09:11.181940   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:09:11.378264   97943 request.go:632] Waited for 196.227358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378336   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378344   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.378355   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.378365   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.383056   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:11.390160   97943 system_pods.go:86] 24 kube-system pods found
	I1210 00:09:11.390190   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:11.390197   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:11.390201   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:11.390207   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:11.390211   97943 system_pods.go:89] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:11.390215   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:11.390219   97943 system_pods.go:89] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:11.390223   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:11.390227   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:11.390231   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:11.390238   97943 system_pods.go:89] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:11.390243   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:11.390247   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:11.390251   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:11.390256   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:11.390259   97943 system_pods.go:89] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:11.390263   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:11.390266   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:11.390273   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:11.390276   97943 system_pods.go:89] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:11.390280   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:11.390284   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:11.390287   97943 system_pods.go:89] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:11.390290   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:11.390298   97943 system_pods.go:126] duration metric: took 208.352897ms to wait for k8s-apps to be running ...
	I1210 00:09:11.390309   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:09:11.390362   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:09:11.405439   97943 system_svc.go:56] duration metric: took 15.123283ms WaitForService to wait for kubelet
	I1210 00:09:11.405468   97943 kubeadm.go:582] duration metric: took 24.617672778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:09:11.405491   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:09:11.577957   97943 request.go:632] Waited for 172.358102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578045   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578061   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.578081   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.578091   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.582050   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.583133   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583157   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583185   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583189   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583193   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583196   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583201   97943 node_conditions.go:105] duration metric: took 177.705427ms to run NodePressure ...
	I1210 00:09:11.583218   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:09:11.583239   97943 start.go:255] writing updated cluster config ...
	I1210 00:09:11.583593   97943 ssh_runner.go:195] Run: rm -f paused
	I1210 00:09:11.635827   97943 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:09:11.638609   97943 out.go:177] * Done! kubectl is now configured to use "ha-070032" cluster and "default" namespace by default
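
Near the end of the trace, api_server.go probes https://192.168.39.187:8443/healthz and treats a 200 response with body "ok" as healthy. Below is a minimal sketch of that probe in plain Go; it is not minikube's api_server.go, and the endpoint address and the InsecureSkipVerify setting are assumptions suited only to a throwaway test cluster (anonymous access to /healthz is typically permitted, but a real client should trust the cluster CA from its kubeconfig instead).

// healthz_probe_sketch.go - hedged illustration of the apiserver /healthz check.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification because the test cluster uses a self-signed CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.187:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver is healthy")
	}
}
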
	
	
	==> CRI-O <==
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.830737679Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789580830668857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79eba97a-4c76-4aea-bb1f-7ef6b8672e42 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.831346527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f345d3c-9f83-46be-86d5-773e62744533 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.831407987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f345d3c-9f83-46be-86d5-773e62744533 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.831673346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f345d3c-9f83-46be-86d5-773e62744533 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.865316992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4657e735-9cb2-4782-8a8d-9b89e4e1096b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.865374203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4657e735-9cb2-4782-8a8d-9b89e4e1096b name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.866615313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6048c2bd-5953-4c7c-bccf-7338ee2dc0c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.867102902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789580867081625,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6048c2bd-5953-4c7c-bccf-7338ee2dc0c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.867577978Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abc5b63d-accb-456f-aca7-08f314f93762 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.867625039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abc5b63d-accb-456f-aca7-08f314f93762 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.867888264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abc5b63d-accb-456f-aca7-08f314f93762 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.898686258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1f4dd2c-6fb8-4833-882f-81d993804a89 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.898831626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1f4dd2c-6fb8-4833-882f-81d993804a89 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.899978687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=397a5166-dd57-4fca-bf4b-5d4e6acd9ca9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.900444349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789580900422733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=397a5166-dd57-4fca-bf4b-5d4e6acd9ca9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.900975869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fc3d011-ab6c-4070-b5de-9ac344d307db name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.901037314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fc3d011-ab6c-4070-b5de-9ac344d307db name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.901262597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fc3d011-ab6c-4070-b5de-9ac344d307db name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.933493087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2684216-97bf-402c-a654-3487701e17c4 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.933568630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2684216-97bf-402c-a654-3487701e17c4 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.934353896Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b509c0b9-8a79-4026-9585-509b9ad2e5aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.934840345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789580934820854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b509c0b9-8a79-4026-9585-509b9ad2e5aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.935192249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32895244-3d58-4f30-b8f0-c3a376ecade9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.935239658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32895244-3d58-4f30-b8f0-c3a376ecade9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:00 ha-070032 crio[662]: time="2024-12-10 00:13:00.935480100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32895244-3d58-4f30-b8f0-c3a376ecade9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c6ab8dccd8ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e3f274c30a395       busybox-7dff88458-d682h
	e305236942a6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   5a85b4a79da52       coredns-7c65d6cfc9-nqnhw
	7c2e334f3ec55       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f558795052a9d       coredns-7c65d6cfc9-fs6l6
	a0bc6f0cc193d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   3ad98b3ae6d22       storage-provisioner
	4c87cad753cfc       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   07cf68f38d235       kindnet-r97q9
	d7ce0ccc8b228       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f6e164f7d5dc2       kube-proxy-xsxdp
	2c832ea7354c3       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   63415c4eed5c6       kube-vip-ha-070032
	a1ad93591d94d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   974a006af9e0d       kube-apiserver-ha-070032
	1482c9caeda45       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   2ae901f42d388       kube-scheduler-ha-070032
	3cc792ca2c209       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   94eb5ad94038f       etcd-ha-070032
	d06c286b00c11       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   baf6b5fc008a9       kube-controller-manager-ha-070032
	
	
	==> coredns [7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea] <==
	[INFO] 10.244.3.2:46682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001449431s
	[INFO] 10.244.1.2:58178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186321s
	[INFO] 10.244.1.2:50380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193258s
	[INFO] 10.244.1.2:46652 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001618s
	[INFO] 10.244.1.2:57883 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426883s
	[INFO] 10.244.0.4:59352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009624s
	[INFO] 10.244.0.4:54543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069497s
	[INFO] 10.244.0.4:53696 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011622s
	[INFO] 10.244.0.4:55436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112389s
	[INFO] 10.244.3.2:43114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706864s
	[INFO] 10.244.3.2:56624 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088751s
	[INFO] 10.244.3.2:44513 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074851s
	[INFO] 10.244.3.2:49956 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081755s
	[INFO] 10.244.1.2:40349 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153721s
	[INFO] 10.244.0.4:44925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128981s
	[INFO] 10.244.0.4:36252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088006s
	[INFO] 10.244.0.4:39383 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070489s
	[INFO] 10.244.0.4:51627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125996s
	[INFO] 10.244.3.2:46896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118479s
	[INFO] 10.244.1.2:38261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013128s
	[INFO] 10.244.1.2:58062 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196774s
	[INFO] 10.244.0.4:47202 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140777s
	[INFO] 10.244.0.4:55399 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091936s
	[INFO] 10.244.3.2:58172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126998s
	[INFO] 10.244.3.2:58403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107335s
	
	
	==> coredns [e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8] <==
	[INFO] 10.244.3.2:39118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.049213372s
	[INFO] 10.244.1.2:47189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002650171s
	[INFO] 10.244.1.2:60873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149978s
	[INFO] 10.244.1.2:48109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137629s
	[INFO] 10.244.1.2:49474 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113792s
	[INFO] 10.244.0.4:41643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681013s
	[INFO] 10.244.0.4:48048 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011923s
	[INFO] 10.244.0.4:35726 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000999387s
	[INFO] 10.244.0.4:41981 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003888s
	[INFO] 10.244.3.2:42883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156584s
	[INFO] 10.244.3.2:47597 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174459s
	[INFO] 10.244.3.2:52426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001324612s
	[INFO] 10.244.3.2:51253 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071403s
	[INFO] 10.244.1.2:50492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118518s
	[INFO] 10.244.1.2:49203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108258s
	[INFO] 10.244.1.2:51348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096375s
	[INFO] 10.244.3.2:42362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236533s
	[INFO] 10.244.3.2:60373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010669s
	[INFO] 10.244.3.2:54648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107013s
	[INFO] 10.244.1.2:49645 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168571s
	[INFO] 10.244.1.2:37889 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146602s
	[INFO] 10.244.0.4:44430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098202s
	[INFO] 10.244.0.4:40310 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093003s
	[INFO] 10.244.3.2:55334 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110256s
	[INFO] 10.244.3.2:41666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108876s
	
	
	==> describe nodes <==
	Name:               ha-070032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-070032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb099128ff44c2a9726305ea6a63c95
	  System UUID:                8fb09912-8ff4-4c2a-9726-305ea6a63c95
	  Boot ID:                    72ec90c5-f76d-4c2b-9a52-435cb90236ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d682h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-fs6l6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7c65d6cfc9-nqnhw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-070032                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-r97q9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-070032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-070032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-xsxdp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-070032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-070032                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-070032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-070032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-070032 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  NodeReady                6m7s   kubelet          Node ha-070032 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  RegisteredNode           4m9s   node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	
	
	Name:               ha-070032-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:07:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:10:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-070032-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c2b302d819044f8ad0494a0ee312d67
	  System UUID:                2c2b302d-8190-44f8-ad04-94a0ee312d67
	  Boot ID:                    b80c4e1c-4168-43bd-ac70-470e7e9703f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7gbz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-070032-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-69btk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m31s
	  kube-system                 kube-apiserver-ha-070032-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-070032-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-7fm88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-ha-070032-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-070032-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m31s                  cidrAllocator    Node ha-070032-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m31s (x8 over 5m31s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m31s (x8 over 5m31s)  kubelet          Node ha-070032-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m31s (x7 over 5m31s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  NodeNotReady             114s                   node-controller  Node ha-070032-m02 status is now: NodeNotReady
	
	
	Name:               ha-070032-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-070032-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7af7f783967c41bab4027928f3eb1ce2
	  System UUID:                7af7f783-967c-41ba-b402-7928f3eb1ce2
	  Boot ID:                    d7bca268-a1b9-47e2-900d-e8e3d560bcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pw24w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-070032-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m16s
	  kube-system                 kindnet-gbrrg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m18s
	  kube-system                 kube-apiserver-ha-070032-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-070032-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-bhnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-ha-070032-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-070032-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m18s                  cidrAllocator    Node ha-070032-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node ha-070032-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x7 over 4m18s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	
	
	Name:               ha-070032-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_09_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-070032-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1722ee99e8fc4ae7bbf0809a3824e471
	  System UUID:                1722ee99-e8fc-4ae7-bbf0-809a3824e471
	  Boot ID:                    4df30219-5a9e-41b4-adfb-6890ccd87aac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-knnxw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m10s
	  kube-system                 kube-proxy-k8xs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     3m12s                  cidrAllocator    Node ha-070032-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m12s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m12s)  kubelet          Node ha-070032-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m12s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  NodeReady                2m52s                  kubelet          Node ha-070032-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 00:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037715] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 00:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611346] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.711169] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.053296] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050206] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.175256] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.129791] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.262857] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.716566] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.745437] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.033385] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.073983] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.636013] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.381804] kauditd_printk_skb: 38 callbacks suppressed
	[Dec10 00:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06] <==
	{"level":"warn","ts":"2024-12-10T00:13:01.116115Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de7cb460fd4f55eb","rtt":"801.242µs","error":"dial tcp 192.168.39.198:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-10T00:13:01.163892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.179533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.185401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.189325Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.199595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.205594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.211950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.212678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.215299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.217987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.222904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.228996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.235301Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.238543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.241330Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.246399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.251975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.257243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.259929Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.262629Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.265942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.270984Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.276786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:01.313068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:13:01 up 7 min,  0 users,  load average: 0.21, 0.29, 0.15
	Linux ha-070032 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3] <==
	I1210 00:12:24.367477       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.364895       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:34.364970       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:34.365169       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:34.365177       1 main.go:301] handling current node
	I1210 00:12:34.365200       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:34.365204       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:34.365319       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:34.365324       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361278       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:44.361407       1 main.go:301] handling current node
	I1210 00:12:44.361435       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:44.361453       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:44.361686       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:44.361767       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361952       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:44.361977       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:54.368862       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:54.368987       1 main.go:301] handling current node
	I1210 00:12:54.369042       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:54.369048       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:54.369300       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:54.369307       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:54.369408       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:54.369414       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c] <==
	W1210 00:06:33.327544       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187]
	I1210 00:06:33.328436       1 controller.go:615] quota admission added evaluator for: endpoints
	I1210 00:06:33.332351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 00:06:33.644177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1210 00:06:34.401030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1210 00:06:34.426254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 00:06:34.437836       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1210 00:06:39.341658       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1210 00:06:39.388665       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1210 00:09:16.643347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53112: use of closed network connection
	E1210 00:09:16.826908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53130: use of closed network connection
	E1210 00:09:17.054445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53146: use of closed network connection
	E1210 00:09:17.230406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53174: use of closed network connection
	E1210 00:09:17.395919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53190: use of closed network connection
	E1210 00:09:17.578908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53210: use of closed network connection
	E1210 00:09:17.752762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53234: use of closed network connection
	E1210 00:09:17.924915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53246: use of closed network connection
	E1210 00:09:18.096320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53250: use of closed network connection
	E1210 00:09:18.374453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53288: use of closed network connection
	E1210 00:09:18.551219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53308: use of closed network connection
	E1210 00:09:18.715487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53328: use of closed network connection
	E1210 00:09:18.882307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53350: use of closed network connection
	E1210 00:09:19.053232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	E1210 00:09:19.219127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53388: use of closed network connection
	W1210 00:10:43.338652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.244]
	
	
	==> kube-controller-manager [d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d] <==
	I1210 00:09:49.805217       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-070032-m04" podCIDRs=["10.244.4.0/24"]
	I1210 00:09:49.805335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.805501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.830568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.055099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.429393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:52.233446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.527465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.529595       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-070032-m04"
	I1210 00:09:53.635341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.748163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.769858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:00.115956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.020321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.021003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:10:09.036523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:12.188838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:20.604295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:11:07.214303       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:11:07.214659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.239149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.332434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.113905ms"
	I1210 00:11:07.332808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="177.2µs"
	I1210 00:11:08.619804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:12.462357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	
	
	==> kube-proxy [d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:06:40.034153       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:06:40.050742       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	E1210 00:06:40.050886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:06:40.097328       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:06:40.097397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:06:40.097429       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:06:40.099955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:06:40.100221       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:06:40.100242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:06:40.102079       1 config.go:199] "Starting service config controller"
	I1210 00:06:40.102108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:06:40.102130       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:06:40.102134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:06:40.103442       1 config.go:328] "Starting node config controller"
	I1210 00:06:40.103468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:06:40.203097       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:06:40.203185       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:06:40.203635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca] <==
	W1210 00:06:32.612869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:06:32.612911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:06:32.694210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.728214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:06:32.728261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.890681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:06:32.890785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.906571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:06:32.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:33.046474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:06:33.046616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:06:36.200867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1210 00:09:49.873453       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.876571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" pod="kube-system/kube-proxy-r2tf6"
	I1210 00:09:49.878867       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.879144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.879364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-v5wzl"
	I1210 00:09:49.879740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.938476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j8rtf" node="ha-070032-m04"
	E1210 00:09:49.939506       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-j8rtf"
	E1210 00:09:51.707755       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	E1210 00:09:51.707858       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f925375b-3698-422b-a607-5a92ae55da32(kube-system/kindnet-nqxxb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-nqxxb"
	E1210 00:09:51.707911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-nqxxb"
	I1210 00:09:51.707964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	
	
	==> kubelet <==
	Dec 10 00:11:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:11:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426250    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426301    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.428969    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.429023    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430352    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430374    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432645    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432732    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434466    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434800    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436591    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436615    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.323013    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438072    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438102    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439455    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439836    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441399    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441436    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.29s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.307914309s)
ha_test.go:309: expected profile "ha-070032" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-070032\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-070032\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-070032\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.187\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.198\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.244\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.178\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt
\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",
\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.28008222s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m03_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-070032 node start m02 -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:05:52
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:05:52.791526   97943 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:52.791657   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791669   97943 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:52.791677   97943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:52.791857   97943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:52.792405   97943 out.go:352] Setting JSON to false
	I1210 00:05:52.793229   97943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6504,"bootTime":1733782649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:52.793329   97943 start.go:139] virtualization: kvm guest
	I1210 00:05:52.796124   97943 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:52.797192   97943 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:52.797225   97943 notify.go:220] Checking for updates...
	I1210 00:05:52.799407   97943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:52.800504   97943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:52.801675   97943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:52.802744   97943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:52.803783   97943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:52.805109   97943 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:52.839813   97943 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:05:52.840958   97943 start.go:297] selected driver: kvm2
	I1210 00:05:52.841009   97943 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:05:52.841037   97943 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:52.841764   97943 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.841862   97943 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:05:52.856053   97943 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:05:52.856105   97943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:05:52.856343   97943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:05:52.856388   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:05:52.856439   97943 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1210 00:05:52.856451   97943 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1210 00:05:52.856513   97943 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:52.856629   97943 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:05:52.858290   97943 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:05:52.859441   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:05:52.859486   97943 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:05:52.859496   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:05:52.859571   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:05:52.859584   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:05:52.859883   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:05:52.859904   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json: {Name:mke01e2b75d6b946a14cfa49d40b8237b928645a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:05:52.860050   97943 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:05:52.860091   97943 start.go:364] duration metric: took 24.816µs to acquireMachinesLock for "ha-070032"
	I1210 00:05:52.860115   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:05:52.860185   97943 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:05:52.862431   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:05:52.862625   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:52.862674   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:52.876494   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1210 00:05:52.876866   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:52.877406   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:05:52.877428   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:52.877772   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:52.877940   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:05:52.878106   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:05:52.878243   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:05:52.878282   97943 client.go:168] LocalClient.Create starting
	I1210 00:05:52.878351   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:05:52.878400   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878419   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878472   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:05:52.878494   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:05:52.878509   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:05:52.878535   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:05:52.878545   97943 main.go:141] libmachine: (ha-070032) Calling .PreCreateCheck
	I1210 00:05:52.878920   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:05:52.879333   97943 main.go:141] libmachine: Creating machine...
	I1210 00:05:52.879348   97943 main.go:141] libmachine: (ha-070032) Calling .Create
	I1210 00:05:52.879474   97943 main.go:141] libmachine: (ha-070032) Creating KVM machine...
	I1210 00:05:52.880541   97943 main.go:141] libmachine: (ha-070032) DBG | found existing default KVM network
	I1210 00:05:52.881177   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.881049   97966 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1210 00:05:52.881198   97943 main.go:141] libmachine: (ha-070032) DBG | created network xml: 
	I1210 00:05:52.881212   97943 main.go:141] libmachine: (ha-070032) DBG | <network>
	I1210 00:05:52.881222   97943 main.go:141] libmachine: (ha-070032) DBG |   <name>mk-ha-070032</name>
	I1210 00:05:52.881231   97943 main.go:141] libmachine: (ha-070032) DBG |   <dns enable='no'/>
	I1210 00:05:52.881237   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881250   97943 main.go:141] libmachine: (ha-070032) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1210 00:05:52.881265   97943 main.go:141] libmachine: (ha-070032) DBG |     <dhcp>
	I1210 00:05:52.881279   97943 main.go:141] libmachine: (ha-070032) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1210 00:05:52.881290   97943 main.go:141] libmachine: (ha-070032) DBG |     </dhcp>
	I1210 00:05:52.881301   97943 main.go:141] libmachine: (ha-070032) DBG |   </ip>
	I1210 00:05:52.881310   97943 main.go:141] libmachine: (ha-070032) DBG |   
	I1210 00:05:52.881318   97943 main.go:141] libmachine: (ha-070032) DBG | </network>
	I1210 00:05:52.881328   97943 main.go:141] libmachine: (ha-070032) DBG | 
	I1210 00:05:52.886258   97943 main.go:141] libmachine: (ha-070032) DBG | trying to create private KVM network mk-ha-070032 192.168.39.0/24...
	I1210 00:05:52.950347   97943 main.go:141] libmachine: (ha-070032) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:52.950384   97943 main.go:141] libmachine: (ha-070032) DBG | private KVM network mk-ha-070032 192.168.39.0/24 created
	I1210 00:05:52.950396   97943 main.go:141] libmachine: (ha-070032) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:05:52.950439   97943 main.go:141] libmachine: (ha-070032) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:05:52.950463   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:52.950265   97966 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.225909   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.225784   97966 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa...
	I1210 00:05:53.325235   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325112   97966 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk...
	I1210 00:05:53.325266   97943 main.go:141] libmachine: (ha-070032) DBG | Writing magic tar header
	I1210 00:05:53.325288   97943 main.go:141] libmachine: (ha-070032) DBG | Writing SSH key tar header
	I1210 00:05:53.325300   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:53.325244   97966 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 ...
	I1210 00:05:53.325369   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032
	I1210 00:05:53.325394   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032 (perms=drwx------)
	I1210 00:05:53.325428   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:05:53.325447   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:53.325560   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:05:53.325599   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:05:53.325634   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:05:53.325659   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:05:53.325669   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:05:53.325681   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:05:53.325695   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:05:53.325703   97943 main.go:141] libmachine: (ha-070032) DBG | Checking permissions on dir: /home
	I1210 00:05:53.325715   97943 main.go:141] libmachine: (ha-070032) DBG | Skipping /home - not owner
	I1210 00:05:53.325747   97943 main.go:141] libmachine: (ha-070032) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:05:53.325763   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:53.326682   97943 main.go:141] libmachine: (ha-070032) define libvirt domain using xml: 
	I1210 00:05:53.326699   97943 main.go:141] libmachine: (ha-070032) <domain type='kvm'>
	I1210 00:05:53.326705   97943 main.go:141] libmachine: (ha-070032)   <name>ha-070032</name>
	I1210 00:05:53.326709   97943 main.go:141] libmachine: (ha-070032)   <memory unit='MiB'>2200</memory>
	I1210 00:05:53.326714   97943 main.go:141] libmachine: (ha-070032)   <vcpu>2</vcpu>
	I1210 00:05:53.326718   97943 main.go:141] libmachine: (ha-070032)   <features>
	I1210 00:05:53.326744   97943 main.go:141] libmachine: (ha-070032)     <acpi/>
	I1210 00:05:53.326772   97943 main.go:141] libmachine: (ha-070032)     <apic/>
	I1210 00:05:53.326783   97943 main.go:141] libmachine: (ha-070032)     <pae/>
	I1210 00:05:53.326806   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.326826   97943 main.go:141] libmachine: (ha-070032)   </features>
	I1210 00:05:53.326854   97943 main.go:141] libmachine: (ha-070032)   <cpu mode='host-passthrough'>
	I1210 00:05:53.326865   97943 main.go:141] libmachine: (ha-070032)   
	I1210 00:05:53.326872   97943 main.go:141] libmachine: (ha-070032)   </cpu>
	I1210 00:05:53.326882   97943 main.go:141] libmachine: (ha-070032)   <os>
	I1210 00:05:53.326889   97943 main.go:141] libmachine: (ha-070032)     <type>hvm</type>
	I1210 00:05:53.326900   97943 main.go:141] libmachine: (ha-070032)     <boot dev='cdrom'/>
	I1210 00:05:53.326906   97943 main.go:141] libmachine: (ha-070032)     <boot dev='hd'/>
	I1210 00:05:53.326920   97943 main.go:141] libmachine: (ha-070032)     <bootmenu enable='no'/>
	I1210 00:05:53.326944   97943 main.go:141] libmachine: (ha-070032)   </os>
	I1210 00:05:53.326956   97943 main.go:141] libmachine: (ha-070032)   <devices>
	I1210 00:05:53.326966   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='cdrom'>
	I1210 00:05:53.326982   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/boot2docker.iso'/>
	I1210 00:05:53.326995   97943 main.go:141] libmachine: (ha-070032)       <target dev='hdc' bus='scsi'/>
	I1210 00:05:53.327012   97943 main.go:141] libmachine: (ha-070032)       <readonly/>
	I1210 00:05:53.327027   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327039   97943 main.go:141] libmachine: (ha-070032)     <disk type='file' device='disk'>
	I1210 00:05:53.327051   97943 main.go:141] libmachine: (ha-070032)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:05:53.327066   97943 main.go:141] libmachine: (ha-070032)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/ha-070032.rawdisk'/>
	I1210 00:05:53.327074   97943 main.go:141] libmachine: (ha-070032)       <target dev='hda' bus='virtio'/>
	I1210 00:05:53.327080   97943 main.go:141] libmachine: (ha-070032)     </disk>
	I1210 00:05:53.327086   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327091   97943 main.go:141] libmachine: (ha-070032)       <source network='mk-ha-070032'/>
	I1210 00:05:53.327096   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327101   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327107   97943 main.go:141] libmachine: (ha-070032)     <interface type='network'>
	I1210 00:05:53.327127   97943 main.go:141] libmachine: (ha-070032)       <source network='default'/>
	I1210 00:05:53.327131   97943 main.go:141] libmachine: (ha-070032)       <model type='virtio'/>
	I1210 00:05:53.327138   97943 main.go:141] libmachine: (ha-070032)     </interface>
	I1210 00:05:53.327142   97943 main.go:141] libmachine: (ha-070032)     <serial type='pty'>
	I1210 00:05:53.327147   97943 main.go:141] libmachine: (ha-070032)       <target port='0'/>
	I1210 00:05:53.327152   97943 main.go:141] libmachine: (ha-070032)     </serial>
	I1210 00:05:53.327157   97943 main.go:141] libmachine: (ha-070032)     <console type='pty'>
	I1210 00:05:53.327167   97943 main.go:141] libmachine: (ha-070032)       <target type='serial' port='0'/>
	I1210 00:05:53.327176   97943 main.go:141] libmachine: (ha-070032)     </console>
	I1210 00:05:53.327183   97943 main.go:141] libmachine: (ha-070032)     <rng model='virtio'>
	I1210 00:05:53.327188   97943 main.go:141] libmachine: (ha-070032)       <backend model='random'>/dev/random</backend>
	I1210 00:05:53.327201   97943 main.go:141] libmachine: (ha-070032)     </rng>
	I1210 00:05:53.327208   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327212   97943 main.go:141] libmachine: (ha-070032)     
	I1210 00:05:53.327219   97943 main.go:141] libmachine: (ha-070032)   </devices>
	I1210 00:05:53.327223   97943 main.go:141] libmachine: (ha-070032) </domain>
	I1210 00:05:53.327229   97943 main.go:141] libmachine: (ha-070032) 
	I1210 00:05:53.331717   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:3e:64:27 in network default
	I1210 00:05:53.332300   97943 main.go:141] libmachine: (ha-070032) Ensuring networks are active...
	I1210 00:05:53.332321   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:53.332935   97943 main.go:141] libmachine: (ha-070032) Ensuring network default is active
	I1210 00:05:53.333268   97943 main.go:141] libmachine: (ha-070032) Ensuring network mk-ha-070032 is active
	I1210 00:05:53.333775   97943 main.go:141] libmachine: (ha-070032) Getting domain xml...
	I1210 00:05:53.334418   97943 main.go:141] libmachine: (ha-070032) Creating domain...
	I1210 00:05:54.486671   97943 main.go:141] libmachine: (ha-070032) Waiting to get IP...
	I1210 00:05:54.487631   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.488004   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.488023   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.487962   97966 retry.go:31] will retry after 250.94638ms: waiting for machine to come up
	I1210 00:05:54.740488   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:54.740898   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:54.740922   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:54.740853   97966 retry.go:31] will retry after 369.652496ms: waiting for machine to come up
	I1210 00:05:55.112670   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.113058   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.113088   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.113006   97966 retry.go:31] will retry after 419.563235ms: waiting for machine to come up
	I1210 00:05:55.534593   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.535015   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.535042   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.534960   97966 retry.go:31] will retry after 426.548067ms: waiting for machine to come up
	I1210 00:05:55.963569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:55.963962   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:55.963978   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:55.963937   97966 retry.go:31] will retry after 617.965427ms: waiting for machine to come up
	I1210 00:05:56.583725   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:56.584072   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:56.584105   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:56.584063   97966 retry.go:31] will retry after 856.526353ms: waiting for machine to come up
	I1210 00:05:57.442311   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:57.442739   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:57.442796   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:57.442703   97966 retry.go:31] will retry after 1.178569719s: waiting for machine to come up
	I1210 00:05:58.622338   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:05:58.622797   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:05:58.622827   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:05:58.622728   97966 retry.go:31] will retry after 1.42624777s: waiting for machine to come up
	I1210 00:06:00.051240   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:00.051614   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:00.051640   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:00.051572   97966 retry.go:31] will retry after 1.801666778s: waiting for machine to come up
	I1210 00:06:01.855728   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:01.856159   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:01.856181   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:01.856123   97966 retry.go:31] will retry after 2.078837624s: waiting for machine to come up
	I1210 00:06:03.936907   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:03.937387   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:03.937421   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:03.937345   97966 retry.go:31] will retry after 2.395168214s: waiting for machine to come up
	I1210 00:06:06.336012   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:06.336380   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:06.336409   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:06.336336   97966 retry.go:31] will retry after 2.386978523s: waiting for machine to come up
	I1210 00:06:08.725386   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:08.725781   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find current IP address of domain ha-070032 in network mk-ha-070032
	I1210 00:06:08.725809   97943 main.go:141] libmachine: (ha-070032) DBG | I1210 00:06:08.725749   97966 retry.go:31] will retry after 4.346211813s: waiting for machine to come up
	I1210 00:06:13.073905   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.074439   97943 main.go:141] libmachine: (ha-070032) Found IP for machine: 192.168.39.187
	I1210 00:06:13.074469   97943 main.go:141] libmachine: (ha-070032) Reserving static IP address...
	I1210 00:06:13.074487   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has current primary IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.075078   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "ha-070032", mac: "52:54:00:ad:ce:dc", ip: "192.168.39.187"} in network mk-ha-070032
	I1210 00:06:13.145743   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:13.145776   97943 main.go:141] libmachine: (ha-070032) Reserved static IP address: 192.168.39.187
	I1210 00:06:13.145818   97943 main.go:141] libmachine: (ha-070032) Waiting for SSH to be available...
	I1210 00:06:13.148440   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:13.148825   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032
	I1210 00:06:13.148851   97943 main.go:141] libmachine: (ha-070032) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:ad:ce:dc
	I1210 00:06:13.149012   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:13.149039   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:13.149072   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:13.149085   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:13.149097   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:13.152933   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:06:13.152951   97943 main.go:141] libmachine: (ha-070032) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:06:13.152957   97943 main.go:141] libmachine: (ha-070032) DBG | command : exit 0
	I1210 00:06:13.152962   97943 main.go:141] libmachine: (ha-070032) DBG | err     : exit status 255
	I1210 00:06:13.152969   97943 main.go:141] libmachine: (ha-070032) DBG | output  : 
	I1210 00:06:16.155027   97943 main.go:141] libmachine: (ha-070032) DBG | Getting to WaitForSSH function...
	I1210 00:06:16.157296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157685   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.157714   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.157840   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH client type: external
	I1210 00:06:16.157860   97943 main.go:141] libmachine: (ha-070032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa (-rw-------)
	I1210 00:06:16.157887   97943 main.go:141] libmachine: (ha-070032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:06:16.157900   97943 main.go:141] libmachine: (ha-070032) DBG | About to run SSH command:
	I1210 00:06:16.157909   97943 main.go:141] libmachine: (ha-070032) DBG | exit 0
	I1210 00:06:16.278179   97943 main.go:141] libmachine: (ha-070032) DBG | SSH cmd err, output: <nil>: 
	I1210 00:06:16.278456   97943 main.go:141] libmachine: (ha-070032) KVM machine creation complete!
	I1210 00:06:16.278762   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:16.279308   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279502   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:16.279643   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:06:16.279659   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:16.280933   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:06:16.280956   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:06:16.280962   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:06:16.280968   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.283215   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283661   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.283689   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.283820   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.283997   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284144   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.284266   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.284430   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.284659   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.284672   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:06:16.381723   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.381748   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:06:16.381756   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.384507   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384824   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.384850   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.384978   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.385166   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385349   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.385493   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.385645   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.385854   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.385866   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:06:16.482791   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:06:16.482875   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:06:16.482890   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:06:16.482898   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483155   97943 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:06:16.483181   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.483360   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.485848   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486193   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.486234   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.486327   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.486524   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486696   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.486841   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.486993   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.487168   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.487182   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:06:16.599563   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:06:16.599595   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.602261   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602629   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.602659   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.602789   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.603020   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603241   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.603430   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.603599   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:16.603761   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:16.603781   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:06:16.710380   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:06:16.710422   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:06:16.710472   97943 buildroot.go:174] setting up certificates
	I1210 00:06:16.710489   97943 provision.go:84] configureAuth start
	I1210 00:06:16.710503   97943 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:06:16.710783   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:16.713296   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713682   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.713712   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.713807   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.716284   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716639   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.716657   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.716807   97943 provision.go:143] copyHostCerts
	I1210 00:06:16.716848   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716882   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:06:16.716898   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:06:16.716962   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:06:16.717048   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717075   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:06:16.717082   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:06:16.717107   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:06:16.717158   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717175   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:06:16.717181   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:06:16.717202   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:06:16.717253   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
	I1210 00:06:16.857455   97943 provision.go:177] copyRemoteCerts
	I1210 00:06:16.857514   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:06:16.857542   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:16.860287   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860660   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:16.860687   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:16.860918   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:16.861136   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:16.861318   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:16.861436   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:16.940074   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:06:16.940147   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:06:16.961938   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:06:16.962011   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:06:16.982947   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:06:16.983027   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:06:17.003600   97943 provision.go:87] duration metric: took 293.095287ms to configureAuth
	I1210 00:06:17.003631   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:06:17.003823   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:17.003908   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.006244   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006580   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.006608   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.006735   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.006932   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007076   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.007191   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.007315   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.007484   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.007502   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:06:17.211708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:06:17.211741   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:06:17.211753   97943 main.go:141] libmachine: (ha-070032) Calling .GetURL
	I1210 00:06:17.212951   97943 main.go:141] libmachine: (ha-070032) DBG | Using libvirt version 6000000
	I1210 00:06:17.215245   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215611   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.215644   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.215769   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:06:17.215785   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:06:17.215796   97943 client.go:171] duration metric: took 24.337498941s to LocalClient.Create
	I1210 00:06:17.215826   97943 start.go:167] duration metric: took 24.337582238s to libmachine.API.Create "ha-070032"
	I1210 00:06:17.215839   97943 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:06:17.215862   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:06:17.215886   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.216149   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:06:17.216177   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.218250   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218590   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.218632   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.218752   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.218921   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.219062   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.219188   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.296211   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:06:17.300251   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:06:17.300276   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:06:17.300345   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:06:17.300421   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:06:17.300431   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:06:17.300529   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:06:17.308961   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:17.331496   97943 start.go:296] duration metric: took 115.636437ms for postStartSetup
	I1210 00:06:17.331591   97943 main.go:141] libmachine: (ha-070032) Calling .GetConfigRaw
	I1210 00:06:17.332201   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.335151   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335527   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.335569   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.335747   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:17.335921   97943 start.go:128] duration metric: took 24.475725142s to createHost
	I1210 00:06:17.335945   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.338044   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338384   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.338412   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.338541   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.338741   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.338882   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.339001   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.339163   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:06:17.339337   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:06:17.339348   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:06:17.439329   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789177.417194070
	
	I1210 00:06:17.439361   97943 fix.go:216] guest clock: 1733789177.417194070
	I1210 00:06:17.439372   97943 fix.go:229] Guest: 2024-12-10 00:06:17.41719407 +0000 UTC Remote: 2024-12-10 00:06:17.335933593 +0000 UTC m=+24.582014233 (delta=81.260477ms)
	I1210 00:06:17.439408   97943 fix.go:200] guest clock delta is within tolerance: 81.260477ms
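
The tolerance check above compares the guest's `date +%s.%N` output against the host-side timestamp taken just before; the ~81ms delta is small enough that no clock resync is forced. Below is a minimal standalone sketch of that comparison, not minikube's fix.go itself: the one-second tolerance and the local (rather than SSH) `date` call are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Ask the machine for its clock in the same "seconds.nanoseconds" form
	// minikube requests from the guest with `date +%s.%N`.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	guest := time.Unix(sec, nsec)

	// Compare against the local clock; only act when the drift exceeds
	// an assumed one-second tolerance.
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drifted by %v, a resync would be needed\n", delta)
	}
}
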
	I1210 00:06:17.439416   97943 start.go:83] releasing machines lock for "ha-070032", held for 24.579311872s
	I1210 00:06:17.439440   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.439778   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:17.442802   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443261   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.443289   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.443497   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444002   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444206   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:17.444324   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:06:17.444401   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.444474   97943 ssh_runner.go:195] Run: cat /version.json
	I1210 00:06:17.444500   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:17.446933   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447294   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447320   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447352   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447499   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.447688   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.447744   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:17.447772   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:17.447844   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.447953   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:17.448103   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:17.448103   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.448278   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:17.448402   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:17.553500   97943 ssh_runner.go:195] Run: systemctl --version
	I1210 00:06:17.559183   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:06:17.714099   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:06:17.720445   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:06:17.720522   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:06:17.735693   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
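
The find/mv step above side-lines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot conflict with the CNI minikube installs later. A rough local sketch of the same rename pass is below; it walks /etc/cni/net.d directly instead of going through find over SSH, and is illustrative rather than minikube's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Side-line bridge/podman CNI configs by renaming them, as the log does.
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, path := range matches {
		base := filepath.Base(path)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled on a previous run
		}
		if !strings.Contains(base, "bridge") && !strings.Contains(base, "podman") {
			continue
		}
		if err := os.Rename(path, path+".mk_disabled"); err != nil {
			fmt.Printf("could not disable %s: %v\n", path, err)
			continue
		}
		fmt.Printf("disabled %s\n", path)
	}
}
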
	I1210 00:06:17.735715   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:06:17.735777   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:06:17.750781   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:06:17.763333   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:06:17.763379   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:06:17.775483   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:06:17.787288   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:06:17.890184   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:06:18.028147   97943 docker.go:233] disabling docker service ...
	I1210 00:06:18.028234   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:06:18.041611   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:06:18.054485   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:06:18.194456   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:06:18.314202   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:06:18.327181   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:06:18.343918   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:06:18.343989   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.353427   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:06:18.353489   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.362859   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.371991   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.381017   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:06:18.391381   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.401252   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:06:18.416290   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
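
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", and opens unprivileged ports through default_sysctls. The sketch below applies roughly equivalent edits to an in-memory copy of such a drop-in; the sample file content is invented for illustration, and the conmon edit is done in place rather than delete-then-insert as in the log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of a 02-crio.conf drop-in, for illustration only.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Mirror the sed edits from the log: pin the pause image, switch the
	// cgroup driver, and put conmon into the "pod" cgroup.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
		ReplaceAllString(conf, `conmon_cgroup = "pod"`)

	// Allow workloads to bind low ports, as the default_sysctls edit does.
	conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}
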
	I1210 00:06:18.426233   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:06:18.435267   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:06:18.435316   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:06:18.447946   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
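
The sysctl probe above fails because br_netfilter is not loaded yet (the /proc/sys/net/bridge/ entries only exist once it is), so the fallback is to modprobe the module and then enable IPv4 forwarding. A minimal local sketch of that check-then-load pattern follows; it runs the commands locally as root rather than over SSH, as an illustration only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge-netfilter sysctl file is missing, the module is not loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("br_netfilter not loaded, loading it now")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}

	// Enable IPv4 forwarding, as the log does with `echo 1 > .../ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Printf("could not enable ip_forward: %v\n", err)
	}
}
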
	I1210 00:06:18.456951   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:18.573205   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:06:18.656643   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:06:18.656726   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:06:18.661011   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:06:18.661071   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:06:18.664478   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:06:18.701494   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:06:18.701578   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.727238   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:06:18.753327   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:06:18.754595   97943 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:06:18.756947   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757200   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:18.757235   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:18.757445   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:06:18.760940   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
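
The /etc/hosts update above is a grep-then-append rewrite: any line already ending in a tab plus host.minikube.internal is dropped and a fresh entry is written, so repeated runs never duplicate it. Here is a small sketch of the same idempotent edit applied to hosts-file text in memory; the sample content is invented for illustration.

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line already ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" entry, mirroring the grep -v / echo rewrite in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	// Invented sample hosts file, for illustration only.
	hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
}
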
	I1210 00:06:18.772727   97943 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:06:18.772828   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:18.772879   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:18.804204   97943 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 00:06:18.804265   97943 ssh_runner.go:195] Run: which lz4
	I1210 00:06:18.807579   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1210 00:06:18.807670   97943 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:06:18.811358   97943 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:06:18.811386   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 00:06:19.965583   97943 crio.go:462] duration metric: took 1.157944737s to copy over tarball
	I1210 00:06:19.965660   97943 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:06:21.934864   97943 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.969164039s)
	I1210 00:06:21.934896   97943 crio.go:469] duration metric: took 1.969285734s to extract the tarball
	I1210 00:06:21.934906   97943 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:06:21.970025   97943 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:06:22.022669   97943 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:06:22.022692   97943 cache_images.go:84] Images are preloaded, skipping loading
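
The block above is the image-preload path: crictl initially reports no kube images, so the cached lz4 tarball is copied into the guest, extracted under /var with tar -I lz4 (preserving xattrs), removed, and crictl is re-run to confirm the images are now present. The condensed sketch below runs that copy-extract-verify flow locally; the tarball path is taken from the log and the missing SSH hop is an assumption for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // destination path seen in the log

	// Only extract when the tarball is actually present.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("no preload tarball at %s, images must be pulled instead\n", tarball)
		return
	}

	// Same tar invocation as the log: lz4 decompression into /var,
	// preserving security.capability xattrs on the image layers.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	_ = os.Remove(tarball)

	// Re-list images to confirm the preload took effect.
	out, _ := exec.Command("crictl", "images", "--output", "json").Output()
	fmt.Printf("crictl reports %d bytes of image metadata\n", len(out))
}
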
	I1210 00:06:22.022702   97943 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:06:22.022843   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:06:22.022948   97943 ssh_runner.go:195] Run: crio config
	I1210 00:06:22.066130   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:22.066152   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:22.066160   97943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:06:22.066182   97943 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:06:22.066308   97943 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:06:22.066339   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:06:22.066403   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:06:22.080860   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:06:22.080973   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1210 00:06:22.081051   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:06:22.089866   97943 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:06:22.089923   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:06:22.098290   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:06:22.112742   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:06:22.127069   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:06:22.141317   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1210 00:06:22.155689   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:06:22.159003   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:06:22.169321   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:06:22.288035   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:06:22.303534   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:06:22.303559   97943 certs.go:194] generating shared ca certs ...
	I1210 00:06:22.303580   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.303764   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:06:22.303807   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:06:22.303816   97943 certs.go:256] generating profile certs ...
	I1210 00:06:22.303867   97943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:06:22.303881   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt with IP's: []
	I1210 00:06:22.579094   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt ...
	I1210 00:06:22.579127   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt: {Name:mk6da1df398501169ebaa4be6e0991a8cdf439ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579330   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key ...
	I1210 00:06:22.579344   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key: {Name:mkcfad0deb7a44a0416ffc9ec52ed32ba5314a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.579449   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8
	I1210 00:06:22.579465   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.254]
	I1210 00:06:22.676685   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 ...
	I1210 00:06:22.676712   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8: {Name:mke16dbfb98e7219f2bbc6176b557aae983cf59d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.676895   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 ...
	I1210 00:06:22.676911   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8: {Name:mke38a755e8856925c614e9671ffbd341e4bacfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:22.677005   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:06:22.677102   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.e24980b8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:06:22.677175   97943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:06:22.677191   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt with IP's: []
	I1210 00:06:23.248653   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt ...
	I1210 00:06:23.248694   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt: {Name:mk109f5f541d0487f6eee37e10618be0687d2257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.248940   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key ...
	I1210 00:06:23.248958   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key: {Name:mkb6a55c3dbe59a4c5c10d115460729fd5017c90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:23.249084   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:06:23.249122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:06:23.249145   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:06:23.249169   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:06:23.249185   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:06:23.249208   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:06:23.249231   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:06:23.249252   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:06:23.249332   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:06:23.249393   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:06:23.249407   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:06:23.249449   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:06:23.249487   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:06:23.249528   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:06:23.249593   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:06:23.249643   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.249668   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.249692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.250316   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:06:23.282882   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:06:23.307116   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:06:23.329842   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:06:23.350860   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:06:23.371360   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:06:23.391801   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:06:23.412467   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:06:23.433690   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:06:23.454439   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:06:23.475132   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:06:23.495728   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:06:23.510105   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:06:23.515363   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:06:23.524990   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528859   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.528911   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:06:23.534177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:06:23.544011   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:06:23.554049   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558290   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.558341   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:06:23.563770   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:06:23.574235   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:06:23.584591   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588826   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.588880   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:06:23.594177   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
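
The test/ln commands above install each CA bundle and then link it under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients look up trust anchors in /etc/ssl/certs. A small sketch of deriving that link name and creating the symlink is below; it shells out to openssl the same way the log does, and the minikubeCA path is taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath,
// where <hash> comes from `openssl x509 -hash -noout -in certPath`.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"

	// Equivalent of ln -fs: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
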
	I1210 00:06:23.604355   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:06:23.608126   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:06:23.608176   97943 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:06:23.608256   97943 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:06:23.608313   97943 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:06:23.644503   97943 cri.go:89] found id: ""
	I1210 00:06:23.644571   97943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:06:23.653924   97943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:06:23.666641   97943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:06:23.677490   97943 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:06:23.677512   97943 kubeadm.go:157] found existing configuration files:
	
	I1210 00:06:23.677553   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:06:23.685837   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:06:23.685897   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:06:23.696600   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:06:23.706796   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:06:23.706854   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:06:23.717362   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.727400   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:06:23.727453   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:06:23.737844   97943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:06:23.747833   97943 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:06:23.747889   97943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:06:23.758170   97943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:06:23.860329   97943 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 00:06:23.860398   97943 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:06:23.982444   97943 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:06:23.982606   97943 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:06:23.982761   97943 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 00:06:23.992051   97943 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:06:24.260435   97943 out.go:235]   - Generating certificates and keys ...
	I1210 00:06:24.260672   97943 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:06:24.260758   97943 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:06:24.260858   97943 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:06:24.290159   97943 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:06:24.463743   97943 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:06:24.802277   97943 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:06:24.950429   97943 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:06:24.950692   97943 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.094704   97943 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:06:25.094857   97943 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-070032 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I1210 00:06:25.315955   97943 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:06:25.908434   97943 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:06:26.061724   97943 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:06:26.061977   97943 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:06:26.261701   97943 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:06:26.508681   97943 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 00:06:26.626369   97943 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:06:26.773060   97943 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:06:26.898048   97943 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:06:26.900096   97943 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:06:26.903197   97943 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:06:26.904929   97943 out.go:235]   - Booting up control plane ...
	I1210 00:06:26.905029   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:06:26.905121   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:06:26.905279   97943 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:06:26.919661   97943 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:06:26.926359   97943 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:06:26.926414   97943 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:06:27.050156   97943 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 00:06:27.050350   97943 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 00:06:27.551278   97943 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.620144ms
	I1210 00:06:27.551408   97943 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 00:06:33.591605   97943 kubeadm.go:310] [api-check] The API server is healthy after 6.043312277s
	I1210 00:06:33.609669   97943 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 00:06:33.625260   97943 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 00:06:33.653756   97943 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 00:06:33.653955   97943 kubeadm.go:310] [mark-control-plane] Marking the node ha-070032 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 00:06:33.666679   97943 kubeadm.go:310] [bootstrap-token] Using token: j34izu.9ybowi8hhzn9pxj2
	I1210 00:06:33.668028   97943 out.go:235]   - Configuring RBAC rules ...
	I1210 00:06:33.668176   97943 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 00:06:33.684358   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 00:06:33.695755   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 00:06:33.698959   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 00:06:33.704573   97943 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 00:06:33.710289   97943 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 00:06:34.000325   97943 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 00:06:34.440225   97943 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 00:06:35.001489   97943 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 00:06:35.002397   97943 kubeadm.go:310] 
	I1210 00:06:35.002481   97943 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 00:06:35.002492   97943 kubeadm.go:310] 
	I1210 00:06:35.002620   97943 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 00:06:35.002641   97943 kubeadm.go:310] 
	I1210 00:06:35.002668   97943 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 00:06:35.002729   97943 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 00:06:35.002789   97943 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 00:06:35.002807   97943 kubeadm.go:310] 
	I1210 00:06:35.002880   97943 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 00:06:35.002909   97943 kubeadm.go:310] 
	I1210 00:06:35.002973   97943 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 00:06:35.002982   97943 kubeadm.go:310] 
	I1210 00:06:35.003062   97943 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 00:06:35.003170   97943 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 00:06:35.003276   97943 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 00:06:35.003287   97943 kubeadm.go:310] 
	I1210 00:06:35.003407   97943 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 00:06:35.003521   97943 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 00:06:35.003539   97943 kubeadm.go:310] 
	I1210 00:06:35.003652   97943 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.003744   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 00:06:35.003795   97943 kubeadm.go:310] 	--control-plane 
	I1210 00:06:35.003809   97943 kubeadm.go:310] 
	I1210 00:06:35.003925   97943 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 00:06:35.003934   97943 kubeadm.go:310] 
	I1210 00:06:35.004033   97943 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j34izu.9ybowi8hhzn9pxj2 \
	I1210 00:06:35.004174   97943 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 00:06:35.004857   97943 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:06:35.005000   97943 cni.go:84] Creating CNI manager for ""
	I1210 00:06:35.005014   97943 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1210 00:06:35.006644   97943 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1210 00:06:35.007773   97943 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1210 00:06:35.013278   97943 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1210 00:06:35.013292   97943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1210 00:06:35.030575   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1210 00:06:35.430253   97943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032 minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=true
	I1210 00:06:35.430379   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:35.453581   97943 ops.go:34] apiserver oom_adj: -16
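
The -16 above is read straight from /proc/<pid>/oom_adj of the kube-apiserver process, confirming the kernel's OOM killer is biased away from it. A one-file sketch of the same read follows; using pgrep and a local /proc are assumptions for illustration.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver PID the same way the log does, via pgrep.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]

	// oom_adj < 0 makes the process less likely to be chosen by the OOM killer.
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("could not read oom_adj:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
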
	I1210 00:06:35.589407   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.090147   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:36.590386   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.089563   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:37.589509   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.090045   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.590492   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 00:06:38.670226   97943 kubeadm.go:1113] duration metric: took 3.23992517s to wait for elevateKubeSystemPrivileges
	I1210 00:06:38.670279   97943 kubeadm.go:394] duration metric: took 15.062107151s to StartCluster
	I1210 00:06:38.670305   97943 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.670408   97943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.671197   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:06:38.671402   97943 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:38.671412   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1210 00:06:38.671420   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:06:38.671426   97943 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:06:38.671508   97943 addons.go:69] Setting storage-provisioner=true in profile "ha-070032"
	I1210 00:06:38.671518   97943 addons.go:69] Setting default-storageclass=true in profile "ha-070032"
	I1210 00:06:38.671525   97943 addons.go:234] Setting addon storage-provisioner=true in "ha-070032"
	I1210 00:06:38.671543   97943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-070032"
	I1210 00:06:38.671557   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.671580   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:38.671976   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672006   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.672032   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.672011   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.687036   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I1210 00:06:38.687249   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I1210 00:06:38.687528   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.687798   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.688109   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688138   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688273   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.688294   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.688523   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688665   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.688726   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.689111   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.689137   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.690837   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:06:38.691061   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:06:38.691470   97943 cert_rotation.go:140] Starting client certificate rotation controller
	I1210 00:06:38.691733   97943 addons.go:234] Setting addon default-storageclass=true in "ha-070032"
	I1210 00:06:38.691777   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:06:38.692023   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.692051   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.704916   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I1210 00:06:38.705299   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.705773   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.705793   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.705818   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43161
	I1210 00:06:38.706223   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.706266   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.706378   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.706814   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.706838   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.707185   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.707762   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:38.707794   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.707810   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:38.709839   97943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:06:38.711065   97943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.711090   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:06:38.711109   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.713927   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714361   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.714394   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.714642   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.714813   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.715016   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.715175   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.722431   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33307
	I1210 00:06:38.722864   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:38.723276   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:38.723296   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:38.723661   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:38.723828   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:06:38.725166   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:06:38.725377   97943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:38.725391   97943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:06:38.725405   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:06:38.727990   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728394   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:06:38.728425   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:06:38.728556   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:06:38.728718   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:06:38.728851   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:06:38.729006   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:06:38.796897   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1210 00:06:38.828298   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:06:38.901174   97943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:06:39.211073   97943 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1210 00:06:39.326332   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326356   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326414   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326438   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326675   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326704   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326718   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326722   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.326732   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326740   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326767   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326783   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.326792   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.326799   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.326952   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.326963   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327027   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.327032   97943 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:06:39.327042   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.327048   97943 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:06:39.327148   97943 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1210 00:06:39.327161   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.327179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.327194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.340698   97943 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1210 00:06:39.341273   97943 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1210 00:06:39.341288   97943 round_trippers.go:469] Request Headers:
	I1210 00:06:39.341295   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:06:39.341298   97943 round_trippers.go:473]     Content-Type: application/json
	I1210 00:06:39.341303   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:06:39.344902   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:06:39.345090   97943 main.go:141] libmachine: Making call to close driver server
	I1210 00:06:39.345105   97943 main.go:141] libmachine: (ha-070032) Calling .Close
	I1210 00:06:39.345391   97943 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:06:39.345413   97943 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:06:39.345420   97943 main.go:141] libmachine: (ha-070032) DBG | Closing plugin on server side
	I1210 00:06:39.347624   97943 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:06:39.348926   97943 addons.go:510] duration metric: took 677.497681ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 00:06:39.348959   97943 start.go:246] waiting for cluster config update ...
	I1210 00:06:39.348973   97943 start.go:255] writing updated cluster config ...
	I1210 00:06:39.350585   97943 out.go:201] 
	I1210 00:06:39.351879   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:06:39.351939   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.353507   97943 out.go:177] * Starting "ha-070032-m02" control-plane node in "ha-070032" cluster
	I1210 00:06:39.354653   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:06:39.354670   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:06:39.354757   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:06:39.354768   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:06:39.354822   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:06:39.354986   97943 start.go:360] acquireMachinesLock for ha-070032-m02: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:06:39.355029   97943 start.go:364] duration metric: took 24.389µs to acquireMachinesLock for "ha-070032-m02"
	I1210 00:06:39.355043   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:06:39.355103   97943 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1210 00:06:39.356785   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:06:39.356859   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:06:39.356884   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:06:39.373740   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41069
	I1210 00:06:39.374206   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:06:39.374743   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:06:39.374764   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:06:39.375056   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:06:39.375244   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:06:39.375358   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:06:39.375496   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:06:39.375520   97943 client.go:168] LocalClient.Create starting
	I1210 00:06:39.375545   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:06:39.375577   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375591   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375644   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:06:39.375662   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:06:39.375672   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:06:39.375686   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:06:39.375694   97943 main.go:141] libmachine: (ha-070032-m02) Calling .PreCreateCheck
	I1210 00:06:39.375822   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:06:39.376224   97943 main.go:141] libmachine: Creating machine...
	I1210 00:06:39.376240   97943 main.go:141] libmachine: (ha-070032-m02) Calling .Create
	I1210 00:06:39.376365   97943 main.go:141] libmachine: (ha-070032-m02) Creating KVM machine...
	I1210 00:06:39.377639   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing default KVM network
	I1210 00:06:39.377788   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found existing private KVM network mk-ha-070032
	I1210 00:06:39.377977   97943 main.go:141] libmachine: (ha-070032-m02) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.378006   97943 main.go:141] libmachine: (ha-070032-m02) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:06:39.378048   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.377952   98310 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.378126   97943 main.go:141] libmachine: (ha-070032-m02) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:06:39.655003   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.654863   98310 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa...
	I1210 00:06:39.917373   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917261   98310 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk...
	I1210 00:06:39.917409   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing magic tar header
	I1210 00:06:39.917424   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Writing SSH key tar header
	I1210 00:06:39.917437   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:39.917371   98310 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 ...
	I1210 00:06:39.917498   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02
	I1210 00:06:39.917529   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02 (perms=drwx------)
	I1210 00:06:39.917548   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:06:39.917560   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:06:39.917572   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:06:39.917584   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:06:39.917605   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:06:39.917616   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:06:39.917629   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:06:39.917642   97943 main.go:141] libmachine: (ha-070032-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:06:39.917652   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:06:39.917664   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:06:39.917673   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Checking permissions on dir: /home
	I1210 00:06:39.917683   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:39.917707   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Skipping /home - not owner
	I1210 00:06:39.918676   97943 main.go:141] libmachine: (ha-070032-m02) define libvirt domain using xml: 
	I1210 00:06:39.918698   97943 main.go:141] libmachine: (ha-070032-m02) <domain type='kvm'>
	I1210 00:06:39.918768   97943 main.go:141] libmachine: (ha-070032-m02)   <name>ha-070032-m02</name>
	I1210 00:06:39.918816   97943 main.go:141] libmachine: (ha-070032-m02)   <memory unit='MiB'>2200</memory>
	I1210 00:06:39.918844   97943 main.go:141] libmachine: (ha-070032-m02)   <vcpu>2</vcpu>
	I1210 00:06:39.918860   97943 main.go:141] libmachine: (ha-070032-m02)   <features>
	I1210 00:06:39.918868   97943 main.go:141] libmachine: (ha-070032-m02)     <acpi/>
	I1210 00:06:39.918874   97943 main.go:141] libmachine: (ha-070032-m02)     <apic/>
	I1210 00:06:39.918881   97943 main.go:141] libmachine: (ha-070032-m02)     <pae/>
	I1210 00:06:39.918890   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.918898   97943 main.go:141] libmachine: (ha-070032-m02)   </features>
	I1210 00:06:39.918908   97943 main.go:141] libmachine: (ha-070032-m02)   <cpu mode='host-passthrough'>
	I1210 00:06:39.918914   97943 main.go:141] libmachine: (ha-070032-m02)   
	I1210 00:06:39.918920   97943 main.go:141] libmachine: (ha-070032-m02)   </cpu>
	I1210 00:06:39.918932   97943 main.go:141] libmachine: (ha-070032-m02)   <os>
	I1210 00:06:39.918939   97943 main.go:141] libmachine: (ha-070032-m02)     <type>hvm</type>
	I1210 00:06:39.918951   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='cdrom'/>
	I1210 00:06:39.918960   97943 main.go:141] libmachine: (ha-070032-m02)     <boot dev='hd'/>
	I1210 00:06:39.918969   97943 main.go:141] libmachine: (ha-070032-m02)     <bootmenu enable='no'/>
	I1210 00:06:39.918978   97943 main.go:141] libmachine: (ha-070032-m02)   </os>
	I1210 00:06:39.918985   97943 main.go:141] libmachine: (ha-070032-m02)   <devices>
	I1210 00:06:39.918996   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='cdrom'>
	I1210 00:06:39.919011   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/boot2docker.iso'/>
	I1210 00:06:39.919023   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hdc' bus='scsi'/>
	I1210 00:06:39.919034   97943 main.go:141] libmachine: (ha-070032-m02)       <readonly/>
	I1210 00:06:39.919044   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919053   97943 main.go:141] libmachine: (ha-070032-m02)     <disk type='file' device='disk'>
	I1210 00:06:39.919066   97943 main.go:141] libmachine: (ha-070032-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:06:39.919085   97943 main.go:141] libmachine: (ha-070032-m02)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/ha-070032-m02.rawdisk'/>
	I1210 00:06:39.919096   97943 main.go:141] libmachine: (ha-070032-m02)       <target dev='hda' bus='virtio'/>
	I1210 00:06:39.919106   97943 main.go:141] libmachine: (ha-070032-m02)     </disk>
	I1210 00:06:39.919113   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919121   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='mk-ha-070032'/>
	I1210 00:06:39.919132   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919140   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919150   97943 main.go:141] libmachine: (ha-070032-m02)     <interface type='network'>
	I1210 00:06:39.919158   97943 main.go:141] libmachine: (ha-070032-m02)       <source network='default'/>
	I1210 00:06:39.919168   97943 main.go:141] libmachine: (ha-070032-m02)       <model type='virtio'/>
	I1210 00:06:39.919177   97943 main.go:141] libmachine: (ha-070032-m02)     </interface>
	I1210 00:06:39.919187   97943 main.go:141] libmachine: (ha-070032-m02)     <serial type='pty'>
	I1210 00:06:39.919201   97943 main.go:141] libmachine: (ha-070032-m02)       <target port='0'/>
	I1210 00:06:39.919211   97943 main.go:141] libmachine: (ha-070032-m02)     </serial>
	I1210 00:06:39.919220   97943 main.go:141] libmachine: (ha-070032-m02)     <console type='pty'>
	I1210 00:06:39.919230   97943 main.go:141] libmachine: (ha-070032-m02)       <target type='serial' port='0'/>
	I1210 00:06:39.919239   97943 main.go:141] libmachine: (ha-070032-m02)     </console>
	I1210 00:06:39.919249   97943 main.go:141] libmachine: (ha-070032-m02)     <rng model='virtio'>
	I1210 00:06:39.919261   97943 main.go:141] libmachine: (ha-070032-m02)       <backend model='random'>/dev/random</backend>
	I1210 00:06:39.919271   97943 main.go:141] libmachine: (ha-070032-m02)     </rng>
	I1210 00:06:39.919278   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919287   97943 main.go:141] libmachine: (ha-070032-m02)     
	I1210 00:06:39.919296   97943 main.go:141] libmachine: (ha-070032-m02)   </devices>
	I1210 00:06:39.919305   97943 main.go:141] libmachine: (ha-070032-m02) </domain>
	I1210 00:06:39.919315   97943 main.go:141] libmachine: (ha-070032-m02) 
	I1210 00:06:39.926117   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:48:53:e3 in network default
	I1210 00:06:39.926859   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring networks are active...
	I1210 00:06:39.926888   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:39.927703   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network default is active
	I1210 00:06:39.928027   97943 main.go:141] libmachine: (ha-070032-m02) Ensuring network mk-ha-070032 is active
	I1210 00:06:39.928408   97943 main.go:141] libmachine: (ha-070032-m02) Getting domain xml...
	I1210 00:06:39.929223   97943 main.go:141] libmachine: (ha-070032-m02) Creating domain...
	I1210 00:06:41.130495   97943 main.go:141] libmachine: (ha-070032-m02) Waiting to get IP...
	I1210 00:06:41.131359   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.131738   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.131767   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.131705   98310 retry.go:31] will retry after 310.664463ms: waiting for machine to come up
	I1210 00:06:41.444273   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.444703   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.444737   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.444646   98310 retry.go:31] will retry after 238.189723ms: waiting for machine to come up
	I1210 00:06:41.683967   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.684372   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.684404   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.684311   98310 retry.go:31] will retry after 302.841079ms: waiting for machine to come up
	I1210 00:06:41.988975   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:41.989468   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:41.989592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:41.989406   98310 retry.go:31] will retry after 546.191287ms: waiting for machine to come up
	I1210 00:06:42.536796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:42.537343   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:42.537376   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:42.537279   98310 retry.go:31] will retry after 759.959183ms: waiting for machine to come up
	I1210 00:06:43.299192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.299592   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.299618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.299550   98310 retry.go:31] will retry after 662.514804ms: waiting for machine to come up
	I1210 00:06:43.963192   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:43.963574   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:43.963604   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:43.963510   98310 retry.go:31] will retry after 928.068602ms: waiting for machine to come up
	I1210 00:06:44.892786   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:44.893282   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:44.893308   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:44.893234   98310 retry.go:31] will retry after 1.121647824s: waiting for machine to come up
	I1210 00:06:46.016637   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:46.017063   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:46.017120   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:46.017054   98310 retry.go:31] will retry after 1.26533881s: waiting for machine to come up
	I1210 00:06:47.283663   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:47.284077   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:47.284103   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:47.284029   98310 retry.go:31] will retry after 1.959318884s: waiting for machine to come up
	I1210 00:06:49.245134   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:49.245690   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:49.245721   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:49.245628   98310 retry.go:31] will retry after 2.080479898s: waiting for machine to come up
	I1210 00:06:51.327593   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:51.327959   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:51.327986   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:51.327912   98310 retry.go:31] will retry after 3.384865721s: waiting for machine to come up
	I1210 00:06:54.714736   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:54.715082   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:54.715116   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:54.715033   98310 retry.go:31] will retry after 4.262963095s: waiting for machine to come up
	I1210 00:06:58.982522   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:06:58.982919   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find current IP address of domain ha-070032-m02 in network mk-ha-070032
	I1210 00:06:58.982944   97943 main.go:141] libmachine: (ha-070032-m02) DBG | I1210 00:06:58.982868   98310 retry.go:31] will retry after 4.754254966s: waiting for machine to come up
	I1210 00:07:03.739570   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740201   97943 main.go:141] libmachine: (ha-070032-m02) Found IP for machine: 192.168.39.198
	I1210 00:07:03.740228   97943 main.go:141] libmachine: (ha-070032-m02) Reserving static IP address...
	I1210 00:07:03.740250   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has current primary IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.740875   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "ha-070032-m02", mac: "52:54:00:a4:53:39", ip: "192.168.39.198"} in network mk-ha-070032
	I1210 00:07:03.810694   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:03.810726   97943 main.go:141] libmachine: (ha-070032-m02) Reserved static IP address: 192.168.39.198
	I1210 00:07:03.810777   97943 main.go:141] libmachine: (ha-070032-m02) Waiting for SSH to be available...
	I1210 00:07:03.813164   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:03.813481   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032
	I1210 00:07:03.813508   97943 main.go:141] libmachine: (ha-070032-m02) DBG | unable to find defined IP address of network mk-ha-070032 interface with MAC address 52:54:00:a4:53:39
	I1210 00:07:03.813691   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:03.813726   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:03.813759   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:03.813774   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:03.813802   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:03.817377   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:07:03.817395   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:07:03.817406   97943 main.go:141] libmachine: (ha-070032-m02) DBG | command : exit 0
	I1210 00:07:03.817413   97943 main.go:141] libmachine: (ha-070032-m02) DBG | err     : exit status 255
	I1210 00:07:03.817429   97943 main.go:141] libmachine: (ha-070032-m02) DBG | output  : 
	I1210 00:07:06.818972   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Getting to WaitForSSH function...
	I1210 00:07:06.821618   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822027   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.822055   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.822215   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH client type: external
	I1210 00:07:06.822245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa (-rw-------)
	I1210 00:07:06.822283   97943 main.go:141] libmachine: (ha-070032-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:07:06.822309   97943 main.go:141] libmachine: (ha-070032-m02) DBG | About to run SSH command:
	I1210 00:07:06.822322   97943 main.go:141] libmachine: (ha-070032-m02) DBG | exit 0
	I1210 00:07:06.950206   97943 main.go:141] libmachine: (ha-070032-m02) DBG | SSH cmd err, output: <nil>: 
	I1210 00:07:06.950523   97943 main.go:141] libmachine: (ha-070032-m02) KVM machine creation complete!
	I1210 00:07:06.950797   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:06.951365   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951576   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:06.951700   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:07:06.951712   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetState
	I1210 00:07:06.952852   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:07:06.952870   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:07:06.952875   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:07:06.952881   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:06.955132   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955556   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:06.955577   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:06.955708   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:06.955904   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956047   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:06.956157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:06.956344   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:06.956613   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:06.956635   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:07:07.065432   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.065465   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:07:07.065472   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.068281   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068647   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.068676   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.068789   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.069000   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069205   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.069353   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.069507   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.069682   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.069696   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:07:07.179172   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:07:07.179254   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:07:07.179270   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:07:07.179281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179507   97943 buildroot.go:166] provisioning hostname "ha-070032-m02"
	I1210 00:07:07.179525   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.179714   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.182380   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182709   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.182735   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.182903   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.183097   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183236   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.183392   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.183547   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.183709   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.183720   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m02 && echo "ha-070032-m02" | sudo tee /etc/hostname
	I1210 00:07:07.308107   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m02
	
	I1210 00:07:07.308157   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.310796   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311128   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.311159   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.311367   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:07.311544   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:07.311834   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:07.312007   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:07.312178   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:07.312195   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:07:07.430746   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:07:07.430783   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:07:07.430808   97943 buildroot.go:174] setting up certificates
	I1210 00:07:07.430826   97943 provision.go:84] configureAuth start
	I1210 00:07:07.430840   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetMachineName
	I1210 00:07:07.431122   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:07.433939   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434313   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.434337   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.434511   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:07.436908   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437220   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:07.437245   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:07.437409   97943 provision.go:143] copyHostCerts
	I1210 00:07:07.437448   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437491   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:07:07.437503   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:07:07.437576   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:07:07.437681   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437707   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:07:07.437715   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:07:07.437755   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:07:07.437820   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437852   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:07:07.437861   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:07:07.437895   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:07:07.437968   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m02 san=[127.0.0.1 192.168.39.198 ha-070032-m02 localhost minikube]
	I1210 00:07:08.044773   97943 provision.go:177] copyRemoteCerts
	I1210 00:07:08.044851   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:07:08.044891   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.047538   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.047846   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.047877   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.048076   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.048336   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.048503   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.048649   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.132237   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:07:08.132310   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:07:08.154520   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:07:08.154605   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:07:08.175951   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:07:08.176034   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:07:08.197284   97943 provision.go:87] duration metric: took 766.441651ms to configureAuth
	I1210 00:07:08.197318   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:07:08.197534   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:08.197630   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.200256   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200605   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.200631   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.200777   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.200956   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201156   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.201290   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.201439   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.201609   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.201622   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:07:08.422427   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:07:08.422470   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:07:08.422479   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetURL
	I1210 00:07:08.423873   97943 main.go:141] libmachine: (ha-070032-m02) DBG | Using libvirt version 6000000
	I1210 00:07:08.426057   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426388   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.426419   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.426586   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:07:08.426605   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:07:08.426616   97943 client.go:171] duration metric: took 29.051087497s to LocalClient.Create
	I1210 00:07:08.426651   97943 start.go:167] duration metric: took 29.051156503s to libmachine.API.Create "ha-070032"
	I1210 00:07:08.426663   97943 start.go:293] postStartSetup for "ha-070032-m02" (driver="kvm2")
	I1210 00:07:08.426676   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:07:08.426697   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.426973   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:07:08.427006   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.429163   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429425   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.429445   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.429585   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.429771   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.429939   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.430073   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.511841   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:07:08.515628   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:07:08.515647   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:07:08.515716   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:07:08.515790   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:07:08.515798   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:07:08.515877   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:07:08.524177   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:08.545083   97943 start.go:296] duration metric: took 118.406585ms for postStartSetup
	I1210 00:07:08.545129   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetConfigRaw
	I1210 00:07:08.545727   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.548447   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.548762   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.548790   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.549019   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:08.549239   97943 start.go:128] duration metric: took 29.194124447s to createHost
	I1210 00:07:08.549263   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.551249   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551581   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.551601   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.551788   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.551950   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.552224   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.552368   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:07:08.552535   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.198 22 <nil> <nil>}
	I1210 00:07:08.552544   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:07:08.658708   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789228.640009863
	
	I1210 00:07:08.658732   97943 fix.go:216] guest clock: 1733789228.640009863
	I1210 00:07:08.658742   97943 fix.go:229] Guest: 2024-12-10 00:07:08.640009863 +0000 UTC Remote: 2024-12-10 00:07:08.549251378 +0000 UTC m=+75.795332018 (delta=90.758485ms)
	I1210 00:07:08.658764   97943 fix.go:200] guest clock delta is within tolerance: 90.758485ms
	I1210 00:07:08.658772   97943 start.go:83] releasing machines lock for "ha-070032-m02", held for 29.303735455s
	I1210 00:07:08.658798   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.659077   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:08.661426   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.661743   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.661779   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.663916   97943 out.go:177] * Found network options:
	I1210 00:07:08.665147   97943 out.go:177]   - NO_PROXY=192.168.39.187
	W1210 00:07:08.666190   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.666213   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666724   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666867   97943 main.go:141] libmachine: (ha-070032-m02) Calling .DriverName
	I1210 00:07:08.666999   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:07:08.667045   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	W1210 00:07:08.667058   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:07:08.667145   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:07:08.667170   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHHostname
	I1210 00:07:08.669614   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669829   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.669978   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670007   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670104   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670217   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:08.670241   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:08.670281   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670437   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHPort
	I1210 00:07:08.670446   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670629   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHKeyPath
	I1210 00:07:08.670648   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.670779   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetSSHUsername
	I1210 00:07:08.670926   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m02/id_rsa Username:docker}
	I1210 00:07:08.901492   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:07:08.907747   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:07:08.907817   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:07:08.923205   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:07:08.923229   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:07:08.923295   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:07:08.937553   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:07:08.950281   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:07:08.950346   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:07:08.962860   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:07:08.975314   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:07:09.086709   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:07:09.237022   97943 docker.go:233] disabling docker service ...
	I1210 00:07:09.237103   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:07:09.249910   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:07:09.261842   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:07:09.377487   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:07:09.489077   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:07:09.503310   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:07:09.520074   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:07:09.520146   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.529237   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:07:09.529299   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.538814   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.547790   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.557022   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:07:09.566274   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.575677   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.591166   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:07:09.600226   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:07:09.608899   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:07:09.608959   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:07:09.621054   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:07:09.630324   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:09.745895   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:07:09.836812   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:07:09.836886   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:07:09.841320   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:07:09.841380   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:07:09.845003   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:07:09.887045   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:07:09.887158   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.913628   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:07:09.940544   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:07:09.941808   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:07:09.942959   97943 main.go:141] libmachine: (ha-070032-m02) Calling .GetIP
	I1210 00:07:09.945644   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946026   97943 main.go:141] libmachine: (ha-070032-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:53:39", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:54 +0000 UTC Type:0 Mac:52:54:00:a4:53:39 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:ha-070032-m02 Clientid:01:52:54:00:a4:53:39}
	I1210 00:07:09.946058   97943 main.go:141] libmachine: (ha-070032-m02) DBG | domain ha-070032-m02 has defined IP address 192.168.39.198 and MAC address 52:54:00:a4:53:39 in network mk-ha-070032
	I1210 00:07:09.946322   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:07:09.950215   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:09.961995   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:07:09.962176   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:09.962427   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.962471   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.977140   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34015
	I1210 00:07:09.977521   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.978002   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.978024   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.978339   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.978526   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:07:09.979937   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:09.980239   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:09.980281   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:09.994247   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 00:07:09.994760   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:09.995248   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:09.995276   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:09.995617   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:09.995804   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:09.995981   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.198
	I1210 00:07:09.995996   97943 certs.go:194] generating shared ca certs ...
	I1210 00:07:09.996013   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:09.996181   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:07:09.996237   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:07:09.996250   97943 certs.go:256] generating profile certs ...
	I1210 00:07:09.996340   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:07:09.996369   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880
	I1210 00:07:09.996386   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.254]
	I1210 00:07:10.076485   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 ...
	I1210 00:07:10.076513   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880: {Name:mk063fa61de97dbebc815f8cdc0b8ad5f6ad42dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076683   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 ...
	I1210 00:07:10.076697   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880: {Name:mk6197070a633b3c7bff009f36273929319901d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:07:10.076768   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:07:10.076894   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.f9753880 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:07:10.077019   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:07:10.077036   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:07:10.077051   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:07:10.077064   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:07:10.077079   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:07:10.077092   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:07:10.077105   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:07:10.077118   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:07:10.077130   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:07:10.077177   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:07:10.077207   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:07:10.077219   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:07:10.077240   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:07:10.077261   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:07:10.077283   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:07:10.077318   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:07:10.077343   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.077356   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.077368   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.077402   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:10.080314   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080656   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:10.080686   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:10.080849   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:10.081053   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:10.081213   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:10.081346   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:10.150955   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:07:10.156109   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:07:10.172000   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:07:10.175843   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:07:10.191569   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:07:10.195845   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:07:10.205344   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:07:10.208990   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:07:10.218513   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:07:10.222172   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:07:10.231444   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:07:10.235751   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:07:10.245673   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:07:10.268586   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:07:10.289301   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:07:10.309755   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:07:10.330372   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 00:07:10.350734   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:07:10.370944   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:07:10.391160   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:07:10.411354   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:07:10.431480   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:07:10.453051   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:07:10.473317   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:07:10.487731   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:07:10.501999   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:07:10.516876   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:07:10.531860   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:07:10.546723   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:07:10.561653   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:07:10.575903   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:07:10.580966   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:07:10.590633   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594516   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.594555   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:07:10.599765   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:07:10.609423   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:07:10.619123   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623118   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.623159   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:07:10.628240   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:07:10.637834   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:07:10.647418   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651160   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.651204   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:07:10.656233   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:07:10.666013   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:07:10.669458   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:07:10.669508   97943 kubeadm.go:934] updating node {m02 192.168.39.198 8443 v1.31.2 crio true true} ...
	I1210 00:07:10.669598   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:07:10.669628   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:07:10.669651   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:07:10.689973   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:07:10.690046   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
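The static pod manifest above is what kube-vip.go renders for this cluster's control-plane VIP (address 192.168.39.254 on eth0, API server port 8443, with load-balancing auto-enabled). As an illustration only, a cut-down rendering of such a manifest with Go's text/template could look like the sketch below; this is not minikube's actual template, and the parameter names are invented for the sketch.

// Sketch only: render a reduced kube-vip static pod manifest from a few parameters.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.7
    args: ["manager"]
    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: VIP 192.168.39.254 on eth0, API server port 8443.
	t.Execute(os.Stdout, struct {
		Interface, VIP string
		Port           int
	}{"eth0", "192.168.39.254", 8443})
}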
	I1210 00:07:10.690097   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.699806   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:07:10.699859   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:07:10.709208   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:07:10.709234   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.709289   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1210 00:07:10.709322   97943 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1210 00:07:10.709296   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:07:10.713239   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:07:10.713260   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:07:11.639149   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.639234   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:07:11.643871   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:07:11.643902   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:07:11.758059   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:11.787926   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.788041   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:07:11.795093   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:07:11.795140   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1210 00:07:12.180780   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:07:12.189342   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:07:12.205977   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:07:12.220614   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:07:12.235844   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:07:12.239089   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:07:12.251338   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:12.381143   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:12.396098   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:07:12.396594   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:12.396651   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:12.412619   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I1210 00:07:12.413166   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:12.413744   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:12.413766   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:12.414184   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:12.414391   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:07:12.414627   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:07:12.414728   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:07:12.414747   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:07:12.418002   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418418   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:07:12.418450   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:07:12.418629   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:07:12.418810   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:07:12.418994   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:07:12.419164   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:07:12.570827   97943 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:12.570886   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443"
	I1210 00:07:32.921639   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tdi3w2.l01zdw261ipf0ila --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m02 --control-plane --apiserver-advertise-address=192.168.39.198 --apiserver-bind-port=8443": (20.350728679s)
	I1210 00:07:32.921682   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:07:33.411739   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m02 minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:07:33.552589   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:07:33.681991   97943 start.go:319] duration metric: took 21.26735926s to joinCluster
	I1210 00:07:33.682079   97943 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:33.682486   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:33.683556   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:07:33.684723   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:07:33.911972   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:07:33.951142   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:07:33.951400   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:07:33.951471   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:07:33.951667   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m02" to be "Ready" ...
	I1210 00:07:33.951780   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:33.951788   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:33.951796   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:33.951800   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:33.961739   97943 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1210 00:07:34.452167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.452198   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.452211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.452219   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.456196   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:34.952070   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:34.952094   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:34.952105   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:34.952111   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:34.957522   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:07:35.452860   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.452883   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.452890   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.452894   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.456005   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.952021   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:35.952048   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:35.952058   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:35.952063   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:35.955318   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:35.955854   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:36.452184   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.452211   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.452222   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.452229   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.455126   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:36.951926   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:36.951955   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:36.951966   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:36.951973   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:36.956909   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:37.452305   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.452330   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.452341   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.452348   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.458679   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:37.952074   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:37.952096   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:37.952105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:37.952111   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:37.954863   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.452953   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.452983   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.452996   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.453003   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.455946   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:38.456796   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:38.952594   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:38.952617   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:38.952626   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:38.952630   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:38.955438   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:39.452632   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.452657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.452669   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.452675   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.455716   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:39.952848   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:39.952879   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:39.952893   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:39.952899   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:39.956221   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.452071   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.452095   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.452105   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.452112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.455375   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:40.952464   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:40.952488   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:40.952507   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:40.952512   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:40.955445   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:40.956051   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:41.452509   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.452534   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.452542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.452547   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.455649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:41.952634   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:41.952657   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:41.952666   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:41.952669   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:41.955344   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.452001   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.452023   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.452032   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.452036   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.454753   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:42.952401   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:42.952423   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:42.952436   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:42.952440   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:42.955178   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.451951   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.451974   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.451982   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.451986   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.454333   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:43.454867   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:43.951938   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:43.951963   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:43.951973   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:43.951978   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:43.954971   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.452196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.452218   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.452225   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.452230   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.455145   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:44.952295   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:44.952319   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:44.952327   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:44.952331   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:44.955347   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:45.452137   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.452165   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.452176   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.452181   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.477510   97943 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I1210 00:07:45.477938   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:45.952299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:45.952324   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:45.952332   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:45.952335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:45.955321   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:46.452358   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.452384   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.452393   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.452397   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.455541   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:46.952608   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:46.952634   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:46.952643   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:46.952647   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:46.957412   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:47.452449   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.452471   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.452480   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.452484   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.455610   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.952117   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:47.952140   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:47.952153   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:47.952158   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:47.955292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:47.956098   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:48.452506   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.452532   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.452539   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.452543   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.455102   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:48.952221   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:48.952248   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:48.952258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:48.952265   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:48.955311   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.452304   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.452327   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.452335   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.452340   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.455564   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:49.952482   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:49.952504   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:49.952512   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:49.952516   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:49.955476   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.452216   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.452240   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.452248   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.452252   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.455231   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:50.455908   97943 node_ready.go:53] node "ha-070032-m02" has status "Ready":"False"
	I1210 00:07:50.952301   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:50.952323   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:50.952331   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:50.952335   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:50.955916   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.452010   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.452030   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.452039   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.452042   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.454528   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.455097   97943 node_ready.go:49] node "ha-070032-m02" has status "Ready":"True"
	I1210 00:07:51.455120   97943 node_ready.go:38] duration metric: took 17.50342824s for node "ha-070032-m02" to be "Ready" ...
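The node_ready wait above is a roughly 500ms poll of the node object until its Ready condition reports True. A self-contained sketch of that check, assuming the kubeconfig path from the log and the node name ha-070032-m02 (an illustration of the pattern, not minikube's node_ready helper):

```go
// Minimal sketch of a node Ready poll, mirroring what the node_ready wait in
// the log does. Not minikube's implementation; paths and timings are taken
// from the log for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same budget as the wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-070032-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls about twice a second
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```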
	I1210 00:07:51.455132   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:07:51.455240   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:51.455254   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.455263   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.455267   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.459208   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.466339   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.466409   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:07:51.466417   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.466423   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.466427   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.469050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.469653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.469667   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.469674   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.469678   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.472023   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.472637   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.472656   97943 pod_ready.go:82] duration metric: took 6.295928ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472667   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.472740   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:07:51.472751   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.472759   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.472768   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.475075   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.475717   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.475733   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.475739   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.475743   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.477769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.478274   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.478291   97943 pod_ready.go:82] duration metric: took 5.614539ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478301   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.478367   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:07:51.478379   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.478388   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.478394   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.480522   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.481177   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.481192   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.481202   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.481209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.483181   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:07:51.483658   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.483673   97943 pod_ready.go:82] duration metric: took 5.36618ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483680   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.483721   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:07:51.483729   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.483736   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.483740   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.485816   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.486281   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:51.486294   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.486301   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.486305   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.488586   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.489007   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.489022   97943 pod_ready.go:82] duration metric: took 5.33676ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.489033   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.652421   97943 request.go:632] Waited for 163.314648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652507   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:07:51.652514   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.652522   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.652529   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.655875   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:51.852945   97943 request.go:632] Waited for 196.352422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853007   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:51.853013   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:51.853021   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:51.853024   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:51.855755   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:51.856291   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:51.856309   97943 pod_ready.go:82] duration metric: took 367.27061ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:51.856319   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.052337   97943 request.go:632] Waited for 195.923221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052427   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:07:52.052445   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.052456   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.052464   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.055099   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.252077   97943 request.go:632] Waited for 196.296135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252149   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:52.252156   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.252167   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.252174   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.255050   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.255574   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.255594   97943 pod_ready.go:82] duration metric: took 399.267887ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.255606   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.452073   97943 request.go:632] Waited for 196.39546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452157   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:07:52.452173   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.452186   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.452244   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.458811   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:07:52.652632   97943 request.go:632] Waited for 193.214443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652697   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:52.652702   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.652711   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.652716   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.655373   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:52.655983   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:52.656003   97943 pod_ready.go:82] duration metric: took 400.387415ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.656017   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:52.852497   97943 request.go:632] Waited for 196.400538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852597   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:07:52.852602   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:52.852610   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:52.852615   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:52.855857   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.052833   97943 request.go:632] Waited for 196.298843ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052897   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.052903   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.052910   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.052914   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.055870   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.056472   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.056497   97943 pod_ready.go:82] duration metric: took 400.471759ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.056510   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.252421   97943 request.go:632] Waited for 195.828491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252528   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:07:53.252541   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.252551   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.252557   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.255434   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:07:53.452445   97943 request.go:632] Waited for 196.391925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452546   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:53.452560   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.452570   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.452575   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.456118   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.456572   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.456590   97943 pod_ready.go:82] duration metric: took 400.071362ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.456605   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.652799   97943 request.go:632] Waited for 196.033566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652870   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:07:53.652877   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.652889   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.652897   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.656566   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.852630   97943 request.go:632] Waited for 195.347256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852735   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:53.852743   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:53.852750   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:53.852754   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:53.856029   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:53.856560   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:53.856580   97943 pod_ready.go:82] duration metric: took 399.967291ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:53.856593   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.052778   97943 request.go:632] Waited for 196.074454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052856   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:07:54.052864   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.052876   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.052886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.056269   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.252099   97943 request.go:632] Waited for 195.297548ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:07:54.252172   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.252179   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.252194   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.256109   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.256828   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.256845   97943 pod_ready.go:82] duration metric: took 400.243574ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.256855   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.452369   97943 request.go:632] Waited for 195.428155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452450   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:07:54.452455   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.452462   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.452469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.455694   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.652684   97943 request.go:632] Waited for 196.354028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652789   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:07:54.652798   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.652807   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.652815   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.655871   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:54.656329   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:07:54.656346   97943 pod_ready.go:82] duration metric: took 399.484539ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:07:54.656357   97943 pod_ready.go:39] duration metric: took 3.201198757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
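Several of the requests in this phase were delayed by client-side throttling. That is expected with the client config shown earlier: QPS and Burst are left at zero, so client-go falls back to its default limiter of 5 requests per second with a burst of 10. If a harness wanted to avoid those waits, the limiter can be widened on the rest.Config before the client is built; a sketch with illustrative values (not what this run uses):

```go
// Sketch only: widen client-go's default rate limiter (QPS 5, burst 10, used
// whenever rest.Config leaves QPS/Burst at zero, as the config dump above does).
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50   // illustrative values, not what the harness uses
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = client
	fmt.Printf("client built against %s with QPS=%v Burst=%v\n", cfg.Host, cfg.QPS, cfg.Burst)
}
```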
	I1210 00:07:54.656372   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:07:54.656424   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:07:54.671199   97943 api_server.go:72] duration metric: took 20.989077821s to wait for apiserver process to appear ...
	I1210 00:07:54.671227   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:07:54.671247   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:07:54.675276   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:07:54.675337   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:07:54.675341   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.675349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.675356   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.676142   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:07:54.676268   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:07:54.676284   97943 api_server.go:131] duration metric: took 5.052294ms to wait for apiserver health ...
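The health check above is a plain GET against /healthz using the kubeconfig's TLS credentials, expecting the literal body "ok", followed by a GET /version to read the control-plane version. A sketch of the same probe built on the config's transport (illustrative, not the harness's api_server helper):

```go
// Sketch of the apiserver health probe: reuse the kubeconfig's TLS setup via
// rest.TransportFor and GET /healthz. Not minikube's code; the kubeconfig path
// is the one named in the log.
package main

import (
	"fmt"
	"io"
	"net/http"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	rt, err := rest.TransportFor(cfg)
	if err != nil {
		panic(err)
	}
	httpClient := &http.Client{Transport: rt}

	resp, err := httpClient.Get(cfg.Host + "/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s/healthz returned %d: %s\n", cfg.Host, resp.StatusCode, body)
}
```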
	I1210 00:07:54.676295   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:07:54.852698   97943 request.go:632] Waited for 176.309011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852754   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:54.852758   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:54.852767   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:54.852774   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:54.857339   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:07:54.861880   97943 system_pods.go:59] 17 kube-system pods found
	I1210 00:07:54.861907   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:54.861912   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:54.861916   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:54.861920   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:54.861952   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:54.861962   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:54.861965   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:54.861969   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:54.861972   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:54.861979   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:54.861982   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:54.861985   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:54.861988   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:54.861992   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:54.861997   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:54.862000   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:54.862003   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:54.862009   97943 system_pods.go:74] duration metric: took 185.705934ms to wait for pod list to return data ...
	I1210 00:07:54.862019   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:07:55.052828   97943 request.go:632] Waited for 190.716484ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052905   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:07:55.052910   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.052920   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.052925   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.056476   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.056707   97943 default_sa.go:45] found service account: "default"
	I1210 00:07:55.056722   97943 default_sa.go:55] duration metric: took 194.697141ms for default service account to be created ...
	I1210 00:07:55.056734   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:07:55.252140   97943 request.go:632] Waited for 195.318975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252222   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:07:55.252228   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.252235   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.252246   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.256177   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.260950   97943 system_pods.go:86] 17 kube-system pods found
	I1210 00:07:55.260986   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:07:55.260993   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:07:55.260998   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:07:55.261002   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:07:55.261005   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:07:55.261009   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:07:55.261013   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:07:55.261017   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:07:55.261021   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:07:55.261025   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:07:55.261028   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:07:55.261032   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:07:55.261035   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:07:55.261038   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:07:55.261041   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:07:55.261044   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:07:55.261047   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:07:55.261054   97943 system_pods.go:126] duration metric: took 204.311621ms to wait for k8s-apps to be running ...
	I1210 00:07:55.261063   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:07:55.261104   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:07:55.274767   97943 system_svc.go:56] duration metric: took 13.694234ms WaitForService to wait for kubelet
	I1210 00:07:55.274800   97943 kubeadm.go:582] duration metric: took 21.592682957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:07:55.274820   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:07:55.452205   97943 request.go:632] Waited for 177.292861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452266   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:07:55.452271   97943 round_trippers.go:469] Request Headers:
	I1210 00:07:55.452278   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:07:55.452283   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:07:55.455802   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:07:55.456649   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456674   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456687   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:07:55.456691   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:07:55.456696   97943 node_conditions.go:105] duration metric: took 181.87045ms to run NodePressure ...
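The NodePressure step reads each node's reported capacity; both nodes above show 17734596Ki of ephemeral storage and 2 CPUs. A short sketch that lists the same fields via client-go (illustrative only):

```go
// Sketch: list nodes and print the ephemeral-storage and CPU capacity fields
// that the NodePressure check in the log reports. Illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}
```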
	I1210 00:07:55.456708   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:07:55.456739   97943 start.go:255] writing updated cluster config ...
	I1210 00:07:55.458841   97943 out.go:201] 
	I1210 00:07:55.460254   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:07:55.460350   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.461990   97943 out.go:177] * Starting "ha-070032-m03" control-plane node in "ha-070032" cluster
	I1210 00:07:55.463162   97943 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:07:55.463187   97943 cache.go:56] Caching tarball of preloaded images
	I1210 00:07:55.463285   97943 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:07:55.463296   97943 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:07:55.463384   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:07:55.463555   97943 start.go:360] acquireMachinesLock for ha-070032-m03: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:07:55.463598   97943 start.go:364] duration metric: took 23.179µs to acquireMachinesLock for "ha-070032-m03"
	I1210 00:07:55.463615   97943 start.go:93] Provisioning new machine with config: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:07:55.463708   97943 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1210 00:07:55.465955   97943 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:07:55.466061   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:07:55.466099   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:07:55.482132   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1210 00:07:55.482649   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:07:55.483189   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:07:55.483214   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:07:55.483546   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:07:55.483725   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:07:55.483847   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:07:55.483970   97943 start.go:159] libmachine.API.Create for "ha-070032" (driver="kvm2")
	I1210 00:07:55.484001   97943 client.go:168] LocalClient.Create starting
	I1210 00:07:55.484030   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:07:55.484063   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484076   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484129   97943 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:07:55.484150   97943 main.go:141] libmachine: Decoding PEM data...
	I1210 00:07:55.484160   97943 main.go:141] libmachine: Parsing certificate...
	I1210 00:07:55.484177   97943 main.go:141] libmachine: Running pre-create checks...
	I1210 00:07:55.484187   97943 main.go:141] libmachine: (ha-070032-m03) Calling .PreCreateCheck
	I1210 00:07:55.484346   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:07:55.484732   97943 main.go:141] libmachine: Creating machine...
	I1210 00:07:55.484749   97943 main.go:141] libmachine: (ha-070032-m03) Calling .Create
	I1210 00:07:55.484892   97943 main.go:141] libmachine: (ha-070032-m03) Creating KVM machine...
	I1210 00:07:55.486009   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing default KVM network
	I1210 00:07:55.486135   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found existing private KVM network mk-ha-070032
	I1210 00:07:55.486275   97943 main.go:141] libmachine: (ha-070032-m03) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.486315   97943 main.go:141] libmachine: (ha-070032-m03) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:07:55.486369   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.486273   98753 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.486441   97943 main.go:141] libmachine: (ha-070032-m03) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:07:55.750942   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.750806   98753 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa...
	I1210 00:07:55.823142   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.822993   98753 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk...
	I1210 00:07:55.823184   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing magic tar header
	I1210 00:07:55.823200   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Writing SSH key tar header
	I1210 00:07:55.823214   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:55.823115   98753 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 ...
	I1210 00:07:55.823231   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03
	I1210 00:07:55.823252   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03 (perms=drwx------)
	I1210 00:07:55.823278   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:07:55.823337   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:07:55.823363   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:07:55.823375   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:07:55.823392   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:07:55.823405   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:07:55.823415   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:07:55.823431   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:07:55.823442   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Checking permissions on dir: /home
	I1210 00:07:55.823456   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Skipping /home - not owner
	I1210 00:07:55.823471   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:07:55.823488   97943 main.go:141] libmachine: (ha-070032-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:07:55.823501   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:55.824547   97943 main.go:141] libmachine: (ha-070032-m03) define libvirt domain using xml: 
	I1210 00:07:55.824562   97943 main.go:141] libmachine: (ha-070032-m03) <domain type='kvm'>
	I1210 00:07:55.824568   97943 main.go:141] libmachine: (ha-070032-m03)   <name>ha-070032-m03</name>
	I1210 00:07:55.824572   97943 main.go:141] libmachine: (ha-070032-m03)   <memory unit='MiB'>2200</memory>
	I1210 00:07:55.824578   97943 main.go:141] libmachine: (ha-070032-m03)   <vcpu>2</vcpu>
	I1210 00:07:55.824582   97943 main.go:141] libmachine: (ha-070032-m03)   <features>
	I1210 00:07:55.824588   97943 main.go:141] libmachine: (ha-070032-m03)     <acpi/>
	I1210 00:07:55.824594   97943 main.go:141] libmachine: (ha-070032-m03)     <apic/>
	I1210 00:07:55.824599   97943 main.go:141] libmachine: (ha-070032-m03)     <pae/>
	I1210 00:07:55.824605   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824615   97943 main.go:141] libmachine: (ha-070032-m03)   </features>
	I1210 00:07:55.824649   97943 main.go:141] libmachine: (ha-070032-m03)   <cpu mode='host-passthrough'>
	I1210 00:07:55.824662   97943 main.go:141] libmachine: (ha-070032-m03)   
	I1210 00:07:55.824670   97943 main.go:141] libmachine: (ha-070032-m03)   </cpu>
	I1210 00:07:55.824678   97943 main.go:141] libmachine: (ha-070032-m03)   <os>
	I1210 00:07:55.824685   97943 main.go:141] libmachine: (ha-070032-m03)     <type>hvm</type>
	I1210 00:07:55.824690   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='cdrom'/>
	I1210 00:07:55.824697   97943 main.go:141] libmachine: (ha-070032-m03)     <boot dev='hd'/>
	I1210 00:07:55.824703   97943 main.go:141] libmachine: (ha-070032-m03)     <bootmenu enable='no'/>
	I1210 00:07:55.824709   97943 main.go:141] libmachine: (ha-070032-m03)   </os>
	I1210 00:07:55.824714   97943 main.go:141] libmachine: (ha-070032-m03)   <devices>
	I1210 00:07:55.824720   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='cdrom'>
	I1210 00:07:55.824728   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/boot2docker.iso'/>
	I1210 00:07:55.824735   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hdc' bus='scsi'/>
	I1210 00:07:55.824740   97943 main.go:141] libmachine: (ha-070032-m03)       <readonly/>
	I1210 00:07:55.824746   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824753   97943 main.go:141] libmachine: (ha-070032-m03)     <disk type='file' device='disk'>
	I1210 00:07:55.824761   97943 main.go:141] libmachine: (ha-070032-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:07:55.824769   97943 main.go:141] libmachine: (ha-070032-m03)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/ha-070032-m03.rawdisk'/>
	I1210 00:07:55.824776   97943 main.go:141] libmachine: (ha-070032-m03)       <target dev='hda' bus='virtio'/>
	I1210 00:07:55.824780   97943 main.go:141] libmachine: (ha-070032-m03)     </disk>
	I1210 00:07:55.824787   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824793   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='mk-ha-070032'/>
	I1210 00:07:55.824799   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824804   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824809   97943 main.go:141] libmachine: (ha-070032-m03)     <interface type='network'>
	I1210 00:07:55.824814   97943 main.go:141] libmachine: (ha-070032-m03)       <source network='default'/>
	I1210 00:07:55.824819   97943 main.go:141] libmachine: (ha-070032-m03)       <model type='virtio'/>
	I1210 00:07:55.824824   97943 main.go:141] libmachine: (ha-070032-m03)     </interface>
	I1210 00:07:55.824830   97943 main.go:141] libmachine: (ha-070032-m03)     <serial type='pty'>
	I1210 00:07:55.824835   97943 main.go:141] libmachine: (ha-070032-m03)       <target port='0'/>
	I1210 00:07:55.824842   97943 main.go:141] libmachine: (ha-070032-m03)     </serial>
	I1210 00:07:55.824846   97943 main.go:141] libmachine: (ha-070032-m03)     <console type='pty'>
	I1210 00:07:55.824852   97943 main.go:141] libmachine: (ha-070032-m03)       <target type='serial' port='0'/>
	I1210 00:07:55.824859   97943 main.go:141] libmachine: (ha-070032-m03)     </console>
	I1210 00:07:55.824863   97943 main.go:141] libmachine: (ha-070032-m03)     <rng model='virtio'>
	I1210 00:07:55.824871   97943 main.go:141] libmachine: (ha-070032-m03)       <backend model='random'>/dev/random</backend>
	I1210 00:07:55.824874   97943 main.go:141] libmachine: (ha-070032-m03)     </rng>
	I1210 00:07:55.824881   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824884   97943 main.go:141] libmachine: (ha-070032-m03)     
	I1210 00:07:55.824891   97943 main.go:141] libmachine: (ha-070032-m03)   </devices>
	I1210 00:07:55.824895   97943 main.go:141] libmachine: (ha-070032-m03) </domain>
	I1210 00:07:55.824901   97943 main.go:141] libmachine: (ha-070032-m03) 
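The XML above is the libvirt domain definition the kvm2 driver hands to libvirt before the "Creating domain..." step that follows. As a rough, minimal sketch of that define-and-start sequence, assuming the libvirt-go bindings and a hypothetical file holding the XML (this is an illustration, not minikube's actual driver code):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the Go libvirt bindings
    )

    func main() {
        // Read a domain definition like the one printed in the log above
        // (file name is hypothetical).
        xml, err := os.ReadFile("ha-070032-m03.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Connect to the local system libvirt daemon.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain from XML, then boot it ("Creating domain...").
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // roughly equivalent to `virsh start ha-070032-m03`
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }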
	I1210 00:07:55.831443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:5a:d9:d9 in network default
	I1210 00:07:55.832042   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:55.832057   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring networks are active...
	I1210 00:07:55.832934   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network default is active
	I1210 00:07:55.833292   97943 main.go:141] libmachine: (ha-070032-m03) Ensuring network mk-ha-070032 is active
	I1210 00:07:55.833793   97943 main.go:141] libmachine: (ha-070032-m03) Getting domain xml...
	I1210 00:07:55.834538   97943 main.go:141] libmachine: (ha-070032-m03) Creating domain...
	I1210 00:07:57.048312   97943 main.go:141] libmachine: (ha-070032-m03) Waiting to get IP...
	I1210 00:07:57.049343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.049867   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.049936   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.049857   98753 retry.go:31] will retry after 285.89703ms: waiting for machine to come up
	I1210 00:07:57.337509   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.337895   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.337921   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.337875   98753 retry.go:31] will retry after 339.218188ms: waiting for machine to come up
	I1210 00:07:57.678323   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.678856   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.678881   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.678806   98753 retry.go:31] will retry after 294.170833ms: waiting for machine to come up
	I1210 00:07:57.974134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:57.974660   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:57.974681   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:57.974611   98753 retry.go:31] will retry after 408.745882ms: waiting for machine to come up
	I1210 00:07:58.385123   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.385636   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.385664   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.385591   98753 retry.go:31] will retry after 527.821664ms: waiting for machine to come up
	I1210 00:07:58.915568   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:58.916006   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:58.916035   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:58.915961   98753 retry.go:31] will retry after 925.585874ms: waiting for machine to come up
	I1210 00:07:59.843180   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:07:59.843652   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:07:59.843679   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:07:59.843610   98753 retry.go:31] will retry after 870.720245ms: waiting for machine to come up
	I1210 00:08:00.715984   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:00.716446   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:00.716472   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:00.716425   98753 retry.go:31] will retry after 1.331743311s: waiting for machine to come up
	I1210 00:08:02.049640   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:02.050041   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:02.050067   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:02.049985   98753 retry.go:31] will retry after 1.76199987s: waiting for machine to come up
	I1210 00:08:03.813933   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:03.814414   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:03.814439   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:03.814370   98753 retry.go:31] will retry after 1.980303699s: waiting for machine to come up
	I1210 00:08:05.796494   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:05.797056   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:05.797086   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:05.797021   98753 retry.go:31] will retry after 2.086128516s: waiting for machine to come up
	I1210 00:08:07.884316   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:07.884692   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:07.884721   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:07.884642   98753 retry.go:31] will retry after 2.780301455s: waiting for machine to come up
	I1210 00:08:10.666546   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:10.666965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:10.666996   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:10.666924   98753 retry.go:31] will retry after 4.142573793s: waiting for machine to come up
	I1210 00:08:14.811574   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:14.811965   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find current IP address of domain ha-070032-m03 in network mk-ha-070032
	I1210 00:08:14.811989   97943 main.go:141] libmachine: (ha-070032-m03) DBG | I1210 00:08:14.811918   98753 retry.go:31] will retry after 5.321214881s: waiting for machine to come up
	I1210 00:08:20.135607   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136014   97943 main.go:141] libmachine: (ha-070032-m03) Found IP for machine: 192.168.39.244
	I1210 00:08:20.136038   97943 main.go:141] libmachine: (ha-070032-m03) Reserving static IP address...
	I1210 00:08:20.136048   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has current primary IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.136451   97943 main.go:141] libmachine: (ha-070032-m03) DBG | unable to find host DHCP lease matching {name: "ha-070032-m03", mac: "52:54:00:36:e7:81", ip: "192.168.39.244"} in network mk-ha-070032
	I1210 00:08:20.209941   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Getting to WaitForSSH function...
	I1210 00:08:20.209976   97943 main.go:141] libmachine: (ha-070032-m03) Reserved static IP address: 192.168.39.244
	I1210 00:08:20.209989   97943 main.go:141] libmachine: (ha-070032-m03) Waiting for SSH to be available...
	I1210 00:08:20.212879   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213267   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.213298   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.213460   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH client type: external
	I1210 00:08:20.213487   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa (-rw-------)
	I1210 00:08:20.213527   97943 main.go:141] libmachine: (ha-070032-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:08:20.213547   97943 main.go:141] libmachine: (ha-070032-m03) DBG | About to run SSH command:
	I1210 00:08:20.213584   97943 main.go:141] libmachine: (ha-070032-m03) DBG | exit 0
	I1210 00:08:20.342480   97943 main.go:141] libmachine: (ha-070032-m03) DBG | SSH cmd err, output: <nil>: 
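The repeated "will retry after ...: waiting for machine to come up" lines show minikube's retry helper polling until the new domain obtains a DHCP lease and answers an SSH probe (`exit 0`). A generic sketch of that poll-with-growing-backoff pattern (delays, growth factor and names are illustrative, not taken from minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    // pollUntil retries fn with a growing delay until it succeeds or the deadline
    // passes, mirroring the "will retry after ...: waiting for machine to come up"
    // pattern in the log above.
    func pollUntil(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond // illustrative starting delay
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for machine: %w", err)
            }
            log.Printf("will retry after %v: waiting for machine to come up", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the backoff between attempts
            }
        }
    }

    func main() {
        attempts := 0
        err := pollUntil(2*time.Minute, func() error {
            attempts++
            if attempts < 5 { // stand-in for "look up DHCP lease / dial SSH"
                return errors.New("unable to find current IP address")
            }
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("machine is up after", attempts, "attempts")
    }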
	I1210 00:08:20.342791   97943 main.go:141] libmachine: (ha-070032-m03) KVM machine creation complete!
	I1210 00:08:20.343090   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:20.343678   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.343881   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:20.344092   97943 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:08:20.344125   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetState
	I1210 00:08:20.345413   97943 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:08:20.345430   97943 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:08:20.345437   97943 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:08:20.345450   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.347967   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348355   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.348389   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.348481   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.348653   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348776   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.348911   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.349041   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.349329   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.349348   97943 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:08:20.449562   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
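Once the machine answers, the driver switches from the external ssh binary to its native Go SSH client and repeats the `exit 0` probe. A self-contained sketch of such a probe using golang.org/x/crypto/ssh (key path and address copied from the log; this is an illustration, not the sshutil implementation):

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key generated for the machine, as reported in the log above.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.244:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // The probe is simply `exit 0`: a zero exit status means SSH is ready.
        if err := sess.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }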
	I1210 00:08:20.449588   97943 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:08:20.449598   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.452398   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452785   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.452812   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.452941   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.453110   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453240   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.453428   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.453598   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.453780   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.453798   97943 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:08:20.555272   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:08:20.555337   97943 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:08:20.555348   97943 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:08:20.555362   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555624   97943 buildroot.go:166] provisioning hostname "ha-070032-m03"
	I1210 00:08:20.555652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.555844   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.558784   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559157   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.559192   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.559357   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.559555   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559716   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.559850   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.560050   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.560266   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.560285   97943 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032-m03 && echo "ha-070032-m03" | sudo tee /etc/hostname
	I1210 00:08:20.676771   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032-m03
	
	I1210 00:08:20.676807   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.679443   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.679776   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.679807   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.680006   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.680185   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680359   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.680491   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.680620   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:20.680832   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:20.680847   97943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:08:20.791291   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:08:20.791325   97943 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:08:20.791341   97943 buildroot.go:174] setting up certificates
	I1210 00:08:20.791358   97943 provision.go:84] configureAuth start
	I1210 00:08:20.791370   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetMachineName
	I1210 00:08:20.791652   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:20.794419   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.794874   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.794902   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.795002   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.798177   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798590   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.798619   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.798789   97943 provision.go:143] copyHostCerts
	I1210 00:08:20.798825   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798862   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:08:20.798871   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:08:20.798934   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:08:20.799007   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799025   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:08:20.799030   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:08:20.799053   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:08:20.799097   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799112   97943 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:08:20.799119   97943 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:08:20.799140   97943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:08:20.799198   97943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032-m03 san=[127.0.0.1 192.168.39.244 ha-070032-m03 localhost minikube]
	I1210 00:08:20.901770   97943 provision.go:177] copyRemoteCerts
	I1210 00:08:20.901829   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:08:20.901857   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:20.904479   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904810   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:20.904842   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:20.904999   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:20.905202   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:20.905341   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:20.905465   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:20.987981   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:08:20.988061   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:08:21.011122   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:08:21.011186   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1210 00:08:21.033692   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:08:21.033754   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:08:21.056597   97943 provision.go:87] duration metric: took 265.223032ms to configureAuth
	I1210 00:08:21.056629   97943 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:08:21.057591   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:21.057673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.060831   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061343   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.061378   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.061673   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.061904   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062107   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.062269   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.062474   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.062700   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.062721   97943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:08:21.281273   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:08:21.281301   97943 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:08:21.281310   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetURL
	I1210 00:08:21.282833   97943 main.go:141] libmachine: (ha-070032-m03) DBG | Using libvirt version 6000000
	I1210 00:08:21.285219   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285581   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.285613   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.285747   97943 main.go:141] libmachine: Docker is up and running!
	I1210 00:08:21.285761   97943 main.go:141] libmachine: Reticulating splines...
	I1210 00:08:21.285769   97943 client.go:171] duration metric: took 25.801757929s to LocalClient.Create
	I1210 00:08:21.285791   97943 start.go:167] duration metric: took 25.801831678s to libmachine.API.Create "ha-070032"
	I1210 00:08:21.285798   97943 start.go:293] postStartSetup for "ha-070032-m03" (driver="kvm2")
	I1210 00:08:21.285807   97943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:08:21.285828   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.286085   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:08:21.286117   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.288055   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288329   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.288370   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.288480   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.288647   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.288777   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.288901   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.369391   97943 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:08:21.373285   97943 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:08:21.373310   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:08:21.373392   97943 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:08:21.373503   97943 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:08:21.373518   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:08:21.373639   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:08:21.382298   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:21.403806   97943 start.go:296] duration metric: took 117.996202ms for postStartSetup
	I1210 00:08:21.403863   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetConfigRaw
	I1210 00:08:21.404476   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.407162   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407495   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.407517   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.407796   97943 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:08:21.408029   97943 start.go:128] duration metric: took 25.944309943s to createHost
	I1210 00:08:21.408053   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.410158   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410458   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.410486   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.410661   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.410839   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411023   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.411142   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.411301   97943 main.go:141] libmachine: Using SSH client type: native
	I1210 00:08:21.411462   97943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I1210 00:08:21.411473   97943 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:08:21.514926   97943 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789301.493981402
	
	I1210 00:08:21.514949   97943 fix.go:216] guest clock: 1733789301.493981402
	I1210 00:08:21.514956   97943 fix.go:229] Guest: 2024-12-10 00:08:21.493981402 +0000 UTC Remote: 2024-12-10 00:08:21.408042688 +0000 UTC m=+148.654123328 (delta=85.938714ms)
	I1210 00:08:21.514972   97943 fix.go:200] guest clock delta is within tolerance: 85.938714ms
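The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the difference stays inside a tolerance. A small sketch of that comparison (the parsing approach and the tolerance value are assumptions for illustration, not minikube's exact logic):

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far the
    // guest clock is from the local (host) clock.
    func clockDelta(guestOutput string) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
        if err != nil {
            return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Since(guest), nil
    }

    func main() {
        // Value taken from the log above.
        delta, err := clockDelta("1733789301.493981402")
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed tolerance, for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }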
	I1210 00:08:21.514978   97943 start.go:83] releasing machines lock for "ha-070032-m03", held for 26.05137115s
	I1210 00:08:21.514997   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.515241   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:21.517912   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.518241   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.518261   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.520470   97943 out.go:177] * Found network options:
	I1210 00:08:21.521800   97943 out.go:177]   - NO_PROXY=192.168.39.187,192.168.39.198
	W1210 00:08:21.523143   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.523168   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.523188   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523682   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.523924   97943 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:08:21.524029   97943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:08:21.524084   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	W1210 00:08:21.524110   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	W1210 00:08:21.524137   97943 proxy.go:119] fail to check proxy env: Error ip not in block
	I1210 00:08:21.524228   97943 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:08:21.524251   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:08:21.527134   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527403   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527435   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527461   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.527644   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.527864   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:21.527884   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.527885   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:21.528014   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:08:21.528094   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528182   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:08:21.528256   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.528295   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:08:21.528396   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:08:21.759543   97943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:08:21.765842   97943 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:08:21.765945   97943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:08:21.781497   97943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:08:21.781528   97943 start.go:495] detecting cgroup driver to use...
	I1210 00:08:21.781601   97943 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:08:21.798260   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:08:21.812631   97943 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:08:21.812703   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:08:21.826291   97943 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:08:21.839819   97943 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:08:21.970011   97943 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:08:22.106825   97943 docker.go:233] disabling docker service ...
	I1210 00:08:22.106898   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:08:22.120845   97943 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:08:22.133078   97943 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:08:22.277754   97943 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:08:22.396135   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:08:22.410691   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:08:22.428016   97943 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:08:22.428081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.437432   97943 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:08:22.437492   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.446807   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.457081   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.466785   97943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:08:22.476232   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.485876   97943 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:08:22.501168   97943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
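Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in settings. This excerpt is reconstructed from the commands in the log; the section placement follows CRI-O's usual TOML layout and is not copied from the machine:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]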
	I1210 00:08:22.511414   97943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:08:22.520354   97943 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:08:22.520415   97943 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:08:22.532412   97943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:08:22.541467   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:22.650142   97943 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:08:22.739814   97943 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:08:22.739908   97943 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:08:22.744756   97943 start.go:563] Will wait 60s for crictl version
	I1210 00:08:22.744820   97943 ssh_runner.go:195] Run: which crictl
	I1210 00:08:22.748420   97943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:08:22.786505   97943 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:08:22.786627   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.812591   97943 ssh_runner.go:195] Run: crio --version
	I1210 00:08:22.840186   97943 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:08:22.841668   97943 out.go:177]   - env NO_PROXY=192.168.39.187
	I1210 00:08:22.842917   97943 out.go:177]   - env NO_PROXY=192.168.39.187,192.168.39.198
	I1210 00:08:22.843965   97943 main.go:141] libmachine: (ha-070032-m03) Calling .GetIP
	I1210 00:08:22.846623   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847074   97943 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:08:22.847104   97943 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:08:22.847299   97943 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:08:22.851246   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:22.863976   97943 mustload.go:65] Loading cluster: ha-070032
	I1210 00:08:22.864213   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:22.864497   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.864537   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.879688   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33619
	I1210 00:08:22.880163   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.880674   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.880695   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.880999   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.881201   97943 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:08:22.882501   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:22.882829   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:22.882872   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:22.897175   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38647
	I1210 00:08:22.897634   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:22.898146   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:22.898164   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:22.898482   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:22.898668   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:22.898817   97943 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.244
	I1210 00:08:22.898832   97943 certs.go:194] generating shared ca certs ...
	I1210 00:08:22.898852   97943 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:22.899000   97943 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:08:22.899051   97943 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:08:22.899064   97943 certs.go:256] generating profile certs ...
	I1210 00:08:22.899170   97943 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:08:22.899201   97943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8
	I1210 00:08:22.899223   97943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:08:23.092450   97943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 ...
	I1210 00:08:23.092478   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8: {Name:mk366065b18659314ca3f0bba1448963daaf0a11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092639   97943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 ...
	I1210 00:08:23.092651   97943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8: {Name:mk5fa66078dcf45a83918146be6cef89d508f259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:08:23.092720   97943 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:08:23.092839   97943 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.293befb8 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:08:23.092959   97943 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
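
The apiserver serving certificate generated above is signed by the shared minikubeCA and carries IP SANs for the in-cluster service address (10.96.0.1), localhost, all three control-plane node IPs, and the kube-vip VIP (192.168.39.254), so the apiserver presents a valid certificate on any of those endpoints. A minimal, self-contained Go sketch of issuing a certificate with that SAN set via crypto/x509 (illustrative only; this is not minikube's certs.go code, and it generates a throwaway CA in place of the real one):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the shared minikubeCA (sketch only;
    	// error handling elided for brevity).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate with the IP SANs listed in the log.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.187"), net.ParseIP("192.168.39.198"),
    		net.ParseIP("192.168.39.244"), net.ParseIP("192.168.39.254"),
    	}
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans,
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
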
	I1210 00:08:23.092977   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:08:23.092992   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:08:23.093006   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:08:23.093017   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:08:23.093029   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:08:23.093041   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:08:23.093053   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:08:23.106669   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:08:23.106767   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:08:23.106812   97943 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:08:23.106826   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:08:23.106858   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:08:23.106887   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:08:23.106916   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:08:23.107014   97943 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:08:23.107059   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.107078   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.107095   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.107140   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:23.110428   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.110865   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:23.110897   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:23.111098   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:23.111299   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:23.111497   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:23.111654   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
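
Every `ssh_runner.go:195] Run:` line in this log is a command executed over an SSH session like the one opened here (key authentication as the `docker` user on port 22). A rough sketch of that pattern with golang.org/x/crypto/ssh, using the address and key path shown above (not minikube's sshutil/ssh_runner implementation, which also handles retries and file transfer):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address taken from the log line above.
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.187:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// One session per command, mirroring the repeated "Run:" lines.
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("stat -c %s /var/lib/minikube/certs/sa.pub")
    	fmt.Printf("err=%v output=%s\n", err, out)
    }
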
	I1210 00:08:23.182834   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1210 00:08:23.187460   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1210 00:08:23.201682   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1210 00:08:23.206212   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1210 00:08:23.216977   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1210 00:08:23.221040   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1210 00:08:23.231771   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1210 00:08:23.235936   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1210 00:08:23.245237   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1210 00:08:23.249225   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1210 00:08:23.259163   97943 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1210 00:08:23.262970   97943 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1210 00:08:23.272905   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:08:23.296036   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:08:23.319479   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:08:23.343697   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:08:23.365055   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1210 00:08:23.386745   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:08:23.408376   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:08:23.431761   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:08:23.453442   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:08:23.474461   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:08:23.496103   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:08:23.518047   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1210 00:08:23.533023   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1210 00:08:23.547698   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1210 00:08:23.563066   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1210 00:08:23.577579   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1210 00:08:23.592182   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1210 00:08:23.608125   97943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1210 00:08:23.627416   97943 ssh_runner.go:195] Run: openssl version
	I1210 00:08:23.632821   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:08:23.642458   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646845   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.646909   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:08:23.652298   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:08:23.662442   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:08:23.672292   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676158   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.676205   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:08:23.681586   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:08:23.691472   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:08:23.701487   97943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705375   97943 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.705413   97943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:08:23.710443   97943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:08:23.720294   97943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:08:23.723799   97943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:08:23.723848   97943 kubeadm.go:934] updating node {m03 192.168.39.244 8443 v1.31.2 crio true true} ...
	I1210 00:08:23.723926   97943 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:08:23.723949   97943 kube-vip.go:115] generating kube-vip config ...
	I1210 00:08:23.723977   97943 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:08:23.738685   97943 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:08:23.738750   97943 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1210 00:08:23.738796   97943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.747698   97943 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1210 00:08:23.747755   97943 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1210 00:08:23.756827   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1210 00:08:23.756846   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:08:23.756856   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756795   97943 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1210 00:08:23.756914   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.756945   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1210 00:08:23.756968   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1210 00:08:23.773755   97943 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773816   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1210 00:08:23.773823   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1210 00:08:23.773877   97943 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1210 00:08:23.773844   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1210 00:08:23.793177   97943 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1210 00:08:23.793213   97943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
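
The `?checksum=file:...sha256` query on the dl.k8s.io URLs above asks the downloader to verify each binary against its published SHA-256 before it lands in the local cache and is copied onto the node. A stdlib-only Go sketch of that verification step (simplified, and not how minikube's downloader is actually wired up):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchVerified downloads url to dest and compares the result against the
    // SHA-256 published at url+".sha256", the pattern the checksum= query refers to.
    func fetchVerified(url, dest string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	fields := strings.Fields(string(sumBytes))
    	if len(fields) == 0 {
    		return fmt.Errorf("empty checksum file for %s", url)
    	}
    	want := fields[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	f, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	// Hash the bytes while writing them to disk.
    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
    	}
    	return nil
    }

    func main() {
    	if err := fetchVerified("https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet", "kubelet"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
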
	I1210 00:08:24.557518   97943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1210 00:08:24.566776   97943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1210 00:08:24.582142   97943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:08:24.597144   97943 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:08:24.611549   97943 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:08:24.615055   97943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:08:24.625780   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:24.763770   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:24.783613   97943 host.go:66] Checking if "ha-070032" exists ...
	I1210 00:08:24.784058   97943 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:08:24.784117   97943 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:08:24.799970   97943 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40537
	I1210 00:08:24.800574   97943 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:08:24.801077   97943 main.go:141] libmachine: Using API Version  1
	I1210 00:08:24.801104   97943 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:08:24.801443   97943 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:08:24.801614   97943 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:08:24.801763   97943 start.go:317] joinCluster: &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:08:24.801913   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1210 00:08:24.801952   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:08:24.804893   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805288   97943 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:08:24.805318   97943 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:08:24.805470   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:08:24.805660   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:08:24.805792   97943 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:08:24.805938   97943 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:08:24.954369   97943 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:24.954415   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443"
	I1210 00:08:45.926879   97943 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7o473f.weadhysgevqpchg6 --discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-070032-m03 --control-plane --apiserver-advertise-address=192.168.39.244 --apiserver-bind-port=8443": (20.972431626s)
	I1210 00:08:45.926930   97943 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1210 00:08:46.537890   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-070032-m03 minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=ha-070032 minikube.k8s.io/primary=false
	I1210 00:08:46.678755   97943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-070032-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1210 00:08:46.787657   97943 start.go:319] duration metric: took 21.985888121s to joinCluster
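
Once the join returns, the new member is labeled with minikube metadata and its control-plane NoSchedule taint is cleared, here by invoking the bundled kubectl over SSH. For comparison, the same labels could be applied directly against the API with client-go; a hypothetical sketch (not what minikube does, which shells out to kubectl as shown above):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path as used by the kubectl invocations in the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Merge-patch the same kind of labels the log applies with kubectl label.
    	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"ha-070032","minikube.k8s.io/primary":"false"}}}`)
    	node, err := cs.CoreV1().Nodes().Patch(context.Background(), "ha-070032-m03",
    		types.MergePatchType, patch, metav1.PatchOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("labeled node", node.Name)
    }
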
	I1210 00:08:46.787759   97943 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:08:46.788166   97943 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:08:46.789343   97943 out.go:177] * Verifying Kubernetes components...
	I1210 00:08:46.790511   97943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:08:47.024805   97943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:08:47.076330   97943 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:08:47.076598   97943 kapi.go:59] client config for ha-070032: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1210 00:08:47.076672   97943 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.187:8443
	I1210 00:08:47.076938   97943 node_ready.go:35] waiting up to 6m0s for node "ha-070032-m03" to be "Ready" ...
	I1210 00:08:47.077046   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.077058   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.077068   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.077072   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.081152   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:47.577919   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:47.577942   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:47.577950   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:47.577954   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:47.581367   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.077920   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.077946   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.077954   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.077957   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.081478   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:48.578106   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:48.578131   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:48.578140   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:48.578145   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:48.581394   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.077995   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.078020   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.078028   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.078032   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.081191   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:49.081654   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:49.577520   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:49.577543   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:49.577568   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:49.577572   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:49.580973   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:50.077456   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.077483   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.077492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.077497   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.083402   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:08:50.577976   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:50.577999   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:50.578007   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:50.578010   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:50.580506   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:08:51.077330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.077376   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.077386   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.077395   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.080649   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.577290   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:51.577326   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:51.577339   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:51.577349   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:51.580882   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:51.581750   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:52.077653   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.077675   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.077683   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.077687   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.080889   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:52.578159   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:52.578187   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:52.578198   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:52.578206   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:52.582757   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:53.078153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.078177   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.078185   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.078189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.081439   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:53.577299   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:53.577324   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:53.577333   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:53.577338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:53.580510   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:54.077196   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.077220   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.077230   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.077236   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.083654   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:08:54.084273   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:54.578076   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:54.578111   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:54.578119   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:54.578123   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:54.581723   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.077626   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.077648   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.077657   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.077660   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.081300   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:55.577841   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:55.577867   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:55.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:55.577886   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:55.581081   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.078005   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.078027   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.078036   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.078039   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.081200   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:56.577743   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:56.577839   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:56.577862   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:56.577877   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:56.582190   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:08:56.583066   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:57.077440   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.077464   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.077472   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.077477   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.080605   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:57.577457   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:57.577484   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:57.577493   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:57.577503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:57.580830   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.077293   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.077331   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.077344   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.077352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.080511   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:58.577256   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:58.577282   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:58.577294   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:58.577299   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:58.580528   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.077895   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.077918   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.077926   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.077932   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.080996   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:08:59.081515   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:08:59.577418   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:08:59.577442   97943 round_trippers.go:469] Request Headers:
	I1210 00:08:59.577450   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:08:59.577454   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:08:59.580861   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.077126   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.077149   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.077160   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.077166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.080369   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:00.577334   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:00.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:00.577369   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:00.577376   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:00.580424   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.077338   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.077364   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.077371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.077375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.080475   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.577333   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:01.577358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:01.577371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:01.577378   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:01.581002   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:01.581675   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:02.078158   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.078188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.078197   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.078202   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.081520   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:02.577513   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:02.577534   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:02.577542   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:02.577548   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:02.580750   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:03.077225   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.077249   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.077258   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.077262   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.080188   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:03.577192   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:03.577225   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:03.577233   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:03.577238   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:03.579962   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:04.078167   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.078198   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.078207   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.078211   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.081272   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:04.081781   97943 node_ready.go:53] node "ha-070032-m03" has status "Ready":"False"
	I1210 00:09:04.577794   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:04.577818   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:04.577826   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:04.577833   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:04.580810   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.077153   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.077175   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.077183   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.077189   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.080235   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.577566   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.577589   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.577597   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.577601   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.580616   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.581339   97943 node_ready.go:49] node "ha-070032-m03" has status "Ready":"True"
	I1210 00:09:05.581357   97943 node_ready.go:38] duration metric: took 18.504395192s for node "ha-070032-m03" to be "Ready" ...
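
The long run of GETs above is node_ready.go polling the apiserver roughly every 500ms until ha-070032-m03 reports the Ready condition, which took about 18.5s here. An equivalent check written with client-go and the apimachinery wait helpers might look like this (a sketch using the kubeconfig path and node name from the log, not minikube's node_ready implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the NodeReady condition is True.
    func nodeIsReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms for up to 6 minutes, matching the wait window in the log.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-070032-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			return nodeIsReady(n), nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(`node "ha-070032-m03" is Ready`)
    }
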
	I1210 00:09:05.581372   97943 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:09:05.581447   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:05.581458   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.581465   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.581469   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.589597   97943 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1210 00:09:05.596462   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.596536   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs6l6
	I1210 00:09:05.596544   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.596551   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.596556   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599226   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.599844   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.599860   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.599867   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.599871   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.602025   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.602633   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.602657   97943 pod_ready.go:82] duration metric: took 6.171823ms for pod "coredns-7c65d6cfc9-fs6l6" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602669   97943 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.602734   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-nqnhw
	I1210 00:09:05.602745   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.602755   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.602759   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.605440   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.606129   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.606147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.606157   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.606166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.608461   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.608910   97943 pod_ready.go:93] pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.608928   97943 pod_ready.go:82] duration metric: took 6.250217ms for pod "coredns-7c65d6cfc9-nqnhw" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608941   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.608999   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032
	I1210 00:09:05.609009   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.609019   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.609029   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.611004   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.611561   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:05.611577   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.611587   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.611591   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.613769   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.614248   97943 pod_ready.go:93] pod "etcd-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.614265   97943 pod_ready.go:82] duration metric: took 5.312355ms for pod "etcd-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614275   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.614330   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m02
	I1210 00:09:05.614341   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.614352   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.614362   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.616534   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:05.617151   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:05.617169   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.617188   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.617196   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.619058   97943 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1210 00:09:05.619439   97943 pod_ready.go:93] pod "etcd-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.619455   97943 pod_ready.go:82] duration metric: took 5.173011ms for pod "etcd-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.619463   97943 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.777761   97943 request.go:632] Waited for 158.225465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777859   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/etcd-ha-070032-m03
	I1210 00:09:05.777871   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.777881   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.777892   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.780968   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.978102   97943 request.go:632] Waited for 196.392006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978169   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:05.978176   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:05.978187   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:05.978209   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:05.981545   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:05.981978   97943 pod_ready.go:93] pod "etcd-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:05.981997   97943 pod_ready.go:82] duration metric: took 362.528097ms for pod "etcd-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:05.982014   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.178303   97943 request.go:632] Waited for 196.186487ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178366   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032
	I1210 00:09:06.178371   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.178384   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.178391   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.181153   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:06.378297   97943 request.go:632] Waited for 196.356871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378357   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:06.378363   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.378371   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.378375   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.381593   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.382165   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.382184   97943 pod_ready.go:82] duration metric: took 400.160632ms for pod "kube-apiserver-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.382194   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.578291   97943 request.go:632] Waited for 195.993966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578353   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m02
	I1210 00:09:06.578358   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.578366   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.578370   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.582418   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:06.777593   97943 request.go:632] Waited for 194.199077ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777669   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:06.777674   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.777681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.777686   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.780997   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:06.781681   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:06.781703   97943 pod_ready.go:82] duration metric: took 399.498231ms for pod "kube-apiserver-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.781713   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:06.977670   97943 request.go:632] Waited for 195.882184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977738   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-070032-m03
	I1210 00:09:06.977758   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:06.977770   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:06.977778   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:06.981052   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.178250   97943 request.go:632] Waited for 196.370885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178313   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:07.178319   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.178329   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.178338   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.182730   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:07.183284   97943 pod_ready.go:93] pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.183306   97943 pod_ready.go:82] duration metric: took 401.586259ms for pod "kube-apiserver-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.183318   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.378237   97943 request.go:632] Waited for 194.824127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378316   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032
	I1210 00:09:07.378322   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.378330   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.378333   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.382039   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.578085   97943 request.go:632] Waited for 195.402263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578148   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:07.578154   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.578162   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.578166   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.581490   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.582147   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.582169   97943 pod_ready.go:82] duration metric: took 398.840074ms for pod "kube-controller-manager-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.582184   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.778287   97943 request.go:632] Waited for 195.989005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778362   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m02
	I1210 00:09:07.778374   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.778386   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.778396   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.781669   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.978394   97943 request.go:632] Waited for 195.912192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978479   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:07.978484   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:07.978492   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:07.978496   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:07.981759   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:07.982200   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:07.982218   97943 pod_ready.go:82] duration metric: took 400.02698ms for pod "kube-controller-manager-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:07.982230   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.178354   97943 request.go:632] Waited for 196.04264ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178439   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-070032-m03
	I1210 00:09:08.178449   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.178466   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.181631   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.378597   97943 request.go:632] Waited for 196.366344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378673   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:08.378683   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.378697   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.378707   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.384450   97943 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1210 00:09:08.385049   97943 pod_ready.go:93] pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.385078   97943 pod_ready.go:82] duration metric: took 402.840862ms for pod "kube-controller-manager-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.385096   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.577999   97943 request.go:632] Waited for 192.799851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578083   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7fm88
	I1210 00:09:08.578091   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.578100   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.578112   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.581292   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:08.777999   97943 request.go:632] Waited for 196.009017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778080   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:08.778085   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.778093   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.778098   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.781007   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:08.781565   97943 pod_ready.go:93] pod "kube-proxy-7fm88" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:08.781586   97943 pod_ready.go:82] duration metric: took 396.482834ms for pod "kube-proxy-7fm88" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.781597   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:08.978485   97943 request.go:632] Waited for 196.79193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978550   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhnsm
	I1210 00:09:08.978555   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:08.978577   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:08.978584   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:08.981555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.178372   97943 request.go:632] Waited for 196.176512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178445   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:09.178450   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.178457   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.178462   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.180718   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.181230   97943 pod_ready.go:93] pod "kube-proxy-bhnsm" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.181253   97943 pod_ready.go:82] duration metric: took 399.648229ms for pod "kube-proxy-bhnsm" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.181267   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.378388   97943 request.go:632] Waited for 197.025674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378477   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsxdp
	I1210 00:09:09.378488   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.378497   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.378503   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.381425   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.578360   97943 request.go:632] Waited for 196.219183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578421   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.578427   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.578435   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.578443   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.581280   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:09.581905   97943 pod_ready.go:93] pod "kube-proxy-xsxdp" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.581924   97943 pod_ready.go:82] duration metric: took 400.650321ms for pod "kube-proxy-xsxdp" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.581937   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.778061   97943 request.go:632] Waited for 196.052401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778128   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032
	I1210 00:09:09.778147   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.778155   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.778159   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.781448   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.978364   97943 request.go:632] Waited for 196.322768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978428   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032
	I1210 00:09:09.978432   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:09.978441   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:09.978451   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:09.981730   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:09.982286   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:09.982308   97943 pod_ready.go:82] duration metric: took 400.362948ms for pod "kube-scheduler-ha-070032" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:09.982322   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.178076   97943 request.go:632] Waited for 195.65251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178166   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m02
	I1210 00:09:10.178177   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.178190   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.178199   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.180876   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.377670   97943 request.go:632] Waited for 196.175118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377736   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m02
	I1210 00:09:10.377741   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.377751   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.377756   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.380801   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.381686   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.381707   97943 pod_ready.go:82] duration metric: took 399.375185ms for pod "kube-scheduler-ha-070032-m02" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.381723   97943 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.578151   97943 request.go:632] Waited for 196.332176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578230   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-070032-m03
	I1210 00:09:10.578239   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.578251   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.578259   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.581336   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:10.778384   97943 request.go:632] Waited for 196.388806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778498   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes/ha-070032-m03
	I1210 00:09:10.778512   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.778524   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.778534   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.781555   97943 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1210 00:09:10.782190   97943 pod_ready.go:93] pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace has status "Ready":"True"
	I1210 00:09:10.782213   97943 pod_ready.go:82] duration metric: took 400.482867ms for pod "kube-scheduler-ha-070032-m03" in "kube-system" namespace to be "Ready" ...
	I1210 00:09:10.782226   97943 pod_ready.go:39] duration metric: took 5.200841149s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:09:10.782243   97943 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:09:10.782306   97943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:09:10.798221   97943 api_server.go:72] duration metric: took 24.010410964s to wait for apiserver process to appear ...
	I1210 00:09:10.798252   97943 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:09:10.798277   97943 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I1210 00:09:10.802683   97943 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I1210 00:09:10.802763   97943 round_trippers.go:463] GET https://192.168.39.187:8443/version
	I1210 00:09:10.802775   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.802786   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.802791   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.803637   97943 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1210 00:09:10.803715   97943 api_server.go:141] control plane version: v1.31.2
	I1210 00:09:10.803733   97943 api_server.go:131] duration metric: took 5.473282ms to wait for apiserver health ...
	I1210 00:09:10.803747   97943 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:09:10.978074   97943 request.go:632] Waited for 174.240033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978174   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:10.978188   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:10.978200   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:10.978210   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:10.984458   97943 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1210 00:09:10.990989   97943 system_pods.go:59] 24 kube-system pods found
	I1210 00:09:10.991013   97943 system_pods.go:61] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:10.991018   97943 system_pods.go:61] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:10.991022   97943 system_pods.go:61] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:10.991026   97943 system_pods.go:61] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:10.991029   97943 system_pods.go:61] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:10.991032   97943 system_pods.go:61] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:10.991034   97943 system_pods.go:61] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:10.991037   97943 system_pods.go:61] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:10.991041   97943 system_pods.go:61] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:10.991044   97943 system_pods.go:61] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:10.991047   97943 system_pods.go:61] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:10.991050   97943 system_pods.go:61] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:10.991054   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:10.991057   97943 system_pods.go:61] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:10.991060   97943 system_pods.go:61] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:10.991064   97943 system_pods.go:61] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:10.991068   97943 system_pods.go:61] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:10.991074   97943 system_pods.go:61] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:10.991078   97943 system_pods.go:61] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:10.991081   97943 system_pods.go:61] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:10.991084   97943 system_pods.go:61] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:10.991087   97943 system_pods.go:61] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:10.991090   97943 system_pods.go:61] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:10.991095   97943 system_pods.go:61] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:10.991101   97943 system_pods.go:74] duration metric: took 187.346055ms to wait for pod list to return data ...
	I1210 00:09:10.991110   97943 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:09:11.178582   97943 request.go:632] Waited for 187.368121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178661   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/default/serviceaccounts
	I1210 00:09:11.178670   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.178681   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.178692   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.181792   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.181919   97943 default_sa.go:45] found service account: "default"
	I1210 00:09:11.181932   97943 default_sa.go:55] duration metric: took 190.816109ms for default service account to be created ...
	I1210 00:09:11.181940   97943 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:09:11.378264   97943 request.go:632] Waited for 196.227358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378336   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/namespaces/kube-system/pods
	I1210 00:09:11.378344   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.378355   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.378365   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.383056   97943 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1210 00:09:11.390160   97943 system_pods.go:86] 24 kube-system pods found
	I1210 00:09:11.390190   97943 system_pods.go:89] "coredns-7c65d6cfc9-fs6l6" [9a1cf8b4-0a76-41a1-930f-1574e31db324] Running
	I1210 00:09:11.390197   97943 system_pods.go:89] "coredns-7c65d6cfc9-nqnhw" [2c81e85b-ea31-43b6-9467-97ff40b0b4a0] Running
	I1210 00:09:11.390201   97943 system_pods.go:89] "etcd-ha-070032" [db08368f-b4a3-4d4b-8863-3a2bef1f832b] Running
	I1210 00:09:11.390207   97943 system_pods.go:89] "etcd-ha-070032-m02" [22d5e0f8-0395-40fe-b734-1718a89c251a] Running
	I1210 00:09:11.390211   97943 system_pods.go:89] "etcd-ha-070032-m03" [ab936be4-5488-4dfc-a02a-d503eaf3ea02] Running
	I1210 00:09:11.390215   97943 system_pods.go:89] "kindnet-69btk" [23838518-3372-48bc-986e-e4688e0963bb] Running
	I1210 00:09:11.390219   97943 system_pods.go:89] "kindnet-gbrrg" [fe384e2f-f251-49d1-9b90-e73cddcd45e1] Running
	I1210 00:09:11.390223   97943 system_pods.go:89] "kindnet-r97q9" [566672d9-989b-4337-84f4-bafd5c70755f] Running
	I1210 00:09:11.390227   97943 system_pods.go:89] "kube-apiserver-ha-070032" [e06e8916-31d6-4690-bc97-ed55126af827] Running
	I1210 00:09:11.390231   97943 system_pods.go:89] "kube-apiserver-ha-070032-m02" [b2c543a7-2d54-44a4-9369-3115629d79bb] Running
	I1210 00:09:11.390238   97943 system_pods.go:89] "kube-apiserver-ha-070032-m03" [7d78ed28-bd45-49a7-bdd8-85d011048605] Running
	I1210 00:09:11.390243   97943 system_pods.go:89] "kube-controller-manager-ha-070032" [d01a145f-816f-41ce-83db-c81713564109] Running
	I1210 00:09:11.390247   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m02" [d8af77ba-8496-4383-a8a8-387110a2a026] Running
	I1210 00:09:11.390251   97943 system_pods.go:89] "kube-controller-manager-ha-070032-m03" [f9860096-95b3-4911-b95f-22a2080afd02] Running
	I1210 00:09:11.390256   97943 system_pods.go:89] "kube-proxy-7fm88" [e935cde6-5a4b-4387-93a9-26ca701c54ac] Running
	I1210 00:09:11.390259   97943 system_pods.go:89] "kube-proxy-bhnsm" [b886bbdb-e0b7-4cb8-8e71-4b9d23993178] Running
	I1210 00:09:11.390263   97943 system_pods.go:89] "kube-proxy-xsxdp" [9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5] Running
	I1210 00:09:11.390266   97943 system_pods.go:89] "kube-scheduler-ha-070032" [28986558-f062-4f8e-9fb1-413a22311d7d] Running
	I1210 00:09:11.390273   97943 system_pods.go:89] "kube-scheduler-ha-070032-m02" [0600e010-0e55-44d9-ac1f-1e62c0139dd1] Running
	I1210 00:09:11.390276   97943 system_pods.go:89] "kube-scheduler-ha-070032-m03" [3b8eede7-a587-4561-9d46-ca58b43d7ebe] Running
	I1210 00:09:11.390280   97943 system_pods.go:89] "kube-vip-ha-070032" [dcecf230-f635-42a1-8fd4-567e3409d086] Running
	I1210 00:09:11.390284   97943 system_pods.go:89] "kube-vip-ha-070032-m02" [af656c29-7151-4ef7-8fa6-91187358671e] Running
	I1210 00:09:11.390287   97943 system_pods.go:89] "kube-vip-ha-070032-m03" [db7c389f-4b41-4fee-a43d-e89ef1455a1d] Running
	I1210 00:09:11.390290   97943 system_pods.go:89] "storage-provisioner" [920be345-6e4a-425f-bcde-0e5463fd92b7] Running
	I1210 00:09:11.390298   97943 system_pods.go:126] duration metric: took 208.352897ms to wait for k8s-apps to be running ...
	I1210 00:09:11.390309   97943 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:09:11.390362   97943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:09:11.405439   97943 system_svc.go:56] duration metric: took 15.123283ms WaitForService to wait for kubelet
	I1210 00:09:11.405468   97943 kubeadm.go:582] duration metric: took 24.617672778s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:09:11.405491   97943 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:09:11.577957   97943 request.go:632] Waited for 172.358102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578045   97943 round_trippers.go:463] GET https://192.168.39.187:8443/api/v1/nodes
	I1210 00:09:11.578061   97943 round_trippers.go:469] Request Headers:
	I1210 00:09:11.578081   97943 round_trippers.go:473]     Accept: application/json, */*
	I1210 00:09:11.578091   97943 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1210 00:09:11.582050   97943 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1210 00:09:11.583133   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583157   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583185   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583189   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583193   97943 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:09:11.583196   97943 node_conditions.go:123] node cpu capacity is 2
	I1210 00:09:11.583201   97943 node_conditions.go:105] duration metric: took 177.705427ms to run NodePressure ...
	I1210 00:09:11.583218   97943 start.go:241] waiting for startup goroutines ...
	I1210 00:09:11.583239   97943 start.go:255] writing updated cluster config ...
	I1210 00:09:11.583593   97943 ssh_runner.go:195] Run: rm -f paused
	I1210 00:09:11.635827   97943 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:09:11.638609   97943 out.go:177] * Done! kubectl is now configured to use "ha-070032" cluster and "default" namespace by default
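	(Reference note, not part of the captured log: the readiness phase above ends with a plain GET against the apiserver's /healthz endpoint at https://192.168.39.187:8443, which returned 200 "ok". A minimal Go sketch of an equivalent standalone probe is shown below; it assumes the same endpoint taken from the log and skips TLS verification purely for illustration, whereas the real minikube client authenticates with the cluster CA and client certificates.)

	// healthz_probe.go - minimal sketch of the apiserver health probe seen in the log above.
	// Assumptions: apiserver address copied from the log; InsecureSkipVerify used only for illustration.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip certificate verification so the sketch runs without the cluster CA bundle.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.187:8443/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy control plane answers 200 with the body "ok", matching the log output.
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}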
	
	
	==> CRI-O <==
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.181670013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789587181652176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=286eeab8-4385-4f3b-a485-909d0df7ae66 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.182254942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e748281d-9f53-4fe6-94bc-d39a4078e262 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.182323684Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e748281d-9f53-4fe6-94bc-d39a4078e262 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.182556721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e748281d-9f53-4fe6-94bc-d39a4078e262 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.215230114Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b9555a0-6d58-4301-91b1-89fa27e8a497 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.215304356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b9555a0-6d58-4301-91b1-89fa27e8a497 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.216370693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4484af1-5fbe-4414-b0cb-08e3220c2938 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.216969055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789587216945361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4484af1-5fbe-4414-b0cb-08e3220c2938 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.217443630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4840cd1d-e64e-409b-ad7c-c583b007957b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.217504244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4840cd1d-e64e-409b-ad7c-c583b007957b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:13:07 ha-070032 crio[662]: time="2024-12-10 00:13:07.217763546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c6ab8dccd8ba997291d2b80d419b2b2e05a32373d0202dd8c719263ae30ccdb,PodSandboxId:e3f274c30a3959296a5c030d1ffa934b64c75f95bf0306039097cb7cf68b4fe4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733789355190103686,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-d682h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 856c7b29-d84b-4688-8af1-c6cd60e5c948,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8,PodSandboxId:5a85b4a79da52f1c29e7fcfa81fa0bedc1eb6bfd38fb444179c78f575d79d860,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215377276438,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nqnhw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c81e85b-ea31-43b6-9467-97ff40b0b4a0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea,PodSandboxId:f558795052a9d75e6b78b43096de277d42834212f92a6b90269650e35759d438,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733789215322355166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fs6l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9a1cf8b4-0a76-41a1-930f-1574e31db324,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b,PodSandboxId:3ad98b3ae6d227d0d652ff69ea4b3d54dd4f3e946d6480b75ce103fe7eaffa18,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733789215263534966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 920be345-6e4a-425f-bcde-0e5463fd92b7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3,PodSandboxId:07cf68f38d235286c1e4b79576c7b2d5fd1fb3fd1b16b9f3f13cfe64eaed0c17,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733789203352008941,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-r97q9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566672d9-989b-4337-84f4-bafd5c70755f,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2,PodSandboxId:f6e164f7d5dc23cc23d172647390aa33a976f115b0e620a3f1c8ceb1972b50c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733789199
811986108,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xsxdp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d403bbd-ec8c-4917-abdd-d1f97d3d0ce5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd,PodSandboxId:63415c4eed5c67ae421bad643372f730f2058ddc450abd01192ea02adf7fd1cd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173378919154
1880372,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853eed8c10d2af858abe4fbec7a3d503,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c,PodSandboxId:974a006af9e0d5faa5a19f159a45e227208fee0ebf388fba700952547d1d2529,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733789188777226881,Labels:map[string]string{
io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ecd4fc4633d25364aa5747e7113ed8c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06,PodSandboxId:94eb5ad94038f1074001a9da0f0356b4211451a99d8498e75f83f6896f01e753,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733789188760652300,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2215b8ddb73d6ba5d32a52569092583d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca,PodSandboxId:2ae901f42d38831fd48a04c073ff0005559a868da016d574361a8a116dae1424,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733789188774438801,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kub
e-scheduler-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b5b3080b6aff5134adaf044003845e8,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d,PodSandboxId:baf6b5fc008a937d1c54f73e03b660e8eddaec5406534b7f3a1ae0650baf4121,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733789188707109904,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-070032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 727141bf4ae4b442d005a5b0b8b6fdb9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4840cd1d-e64e-409b-ad7c-c583b007957b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c6ab8dccd8ba       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   e3f274c30a395       busybox-7dff88458-d682h
	e305236942a6a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   5a85b4a79da52       coredns-7c65d6cfc9-nqnhw
	7c2e334f3ec55       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f558795052a9d       coredns-7c65d6cfc9-fs6l6
	a0bc6f0cc193d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   3ad98b3ae6d22       storage-provisioner
	4c87cad753cfc       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   07cf68f38d235       kindnet-r97q9
	d7ce0ccc8b228       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   f6e164f7d5dc2       kube-proxy-xsxdp
	2c832ea7354c3       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   63415c4eed5c6       kube-vip-ha-070032
	a1ad93591d94d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   974a006af9e0d       kube-apiserver-ha-070032
	1482c9caeda45       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   2ae901f42d388       kube-scheduler-ha-070032
	3cc792ca2c209       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   94eb5ad94038f       etcd-ha-070032
	d06c286b00c11       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   baf6b5fc008a9       kube-controller-manager-ha-070032
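The container status table above is the CRI-level view of the node, i.e. the same data cri-o returns for the ListContainers polls logged earlier. If the listing needs to be reproduced ad hoc — a minimal sketch, assuming the ha-070032 profile is still running and crictl is available inside the guest:

	minikube -p ha-070032 ssh -- sudo crictl ps -a

crictl ps issues the same /runtime.v1.RuntimeService/ListContainers call, so its CONTAINER/IMAGE/STATE/NAME columns should line up with the table above.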
	
	
	==> coredns [7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea] <==
	[INFO] 10.244.3.2:46682 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001449431s
	[INFO] 10.244.1.2:58178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186321s
	[INFO] 10.244.1.2:50380 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193258s
	[INFO] 10.244.1.2:46652 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001618s
	[INFO] 10.244.1.2:57883 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003426883s
	[INFO] 10.244.0.4:59352 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009624s
	[INFO] 10.244.0.4:54543 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069497s
	[INFO] 10.244.0.4:53696 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011622s
	[INFO] 10.244.0.4:55436 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000112389s
	[INFO] 10.244.3.2:43114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706864s
	[INFO] 10.244.3.2:56624 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088751s
	[INFO] 10.244.3.2:44513 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074851s
	[INFO] 10.244.3.2:49956 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081755s
	[INFO] 10.244.1.2:40349 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153721s
	[INFO] 10.244.0.4:44925 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128981s
	[INFO] 10.244.0.4:36252 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088006s
	[INFO] 10.244.0.4:39383 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070489s
	[INFO] 10.244.0.4:51627 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125996s
	[INFO] 10.244.3.2:46896 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118479s
	[INFO] 10.244.1.2:38261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013128s
	[INFO] 10.244.1.2:58062 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000196774s
	[INFO] 10.244.0.4:47202 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140777s
	[INFO] 10.244.0.4:55399 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091936s
	[INFO] 10.244.3.2:58172 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126998s
	[INFO] 10.244.3.2:58403 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107335s
	
	
	==> coredns [e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8] <==
	[INFO] 10.244.3.2:39118 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.049213372s
	[INFO] 10.244.1.2:47189 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002650171s
	[INFO] 10.244.1.2:60873 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149978s
	[INFO] 10.244.1.2:48109 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137629s
	[INFO] 10.244.1.2:49474 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113792s
	[INFO] 10.244.0.4:41643 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681013s
	[INFO] 10.244.0.4:48048 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011923s
	[INFO] 10.244.0.4:35726 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000999387s
	[INFO] 10.244.0.4:41981 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003888s
	[INFO] 10.244.3.2:42883 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156584s
	[INFO] 10.244.3.2:47597 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174459s
	[INFO] 10.244.3.2:52426 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001324612s
	[INFO] 10.244.3.2:51253 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071403s
	[INFO] 10.244.1.2:50492 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118518s
	[INFO] 10.244.1.2:49203 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108258s
	[INFO] 10.244.1.2:51348 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096375s
	[INFO] 10.244.3.2:42362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236533s
	[INFO] 10.244.3.2:60373 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010669s
	[INFO] 10.244.3.2:54648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107013s
	[INFO] 10.244.1.2:49645 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000168571s
	[INFO] 10.244.1.2:37889 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146602s
	[INFO] 10.244.0.4:44430 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098202s
	[INFO] 10.244.0.4:40310 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093003s
	[INFO] 10.244.3.2:55334 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110256s
	[INFO] 10.244.3.2:41666 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108876s
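Both coredns replicas show only routine NOERROR/NXDOMAIN lookups, so in-cluster DNS looks healthy at this point in the run. To re-fetch these logs outside the report — a minimal sketch, assuming the kubeconfig context is named after the profile and the pod names from the dump are still current:

	kubectl --context ha-070032 -n kube-system logs coredns-7c65d6cfc9-fs6l6
	kubectl --context ha-070032 -n kube-system logs coredns-7c65d6cfc9-nqnhw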
	
	
	==> describe nodes <==
	Name:               ha-070032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:38 +0000   Tue, 10 Dec 2024 00:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    ha-070032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fb099128ff44c2a9726305ea6a63c95
	  System UUID:                8fb09912-8ff4-4c2a-9726-305ea6a63c95
	  Boot ID:                    72ec90c5-f76d-4c2b-9a52-435cb90236ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-d682h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 coredns-7c65d6cfc9-fs6l6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 coredns-7c65d6cfc9-nqnhw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 etcd-ha-070032                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-r97q9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-070032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-070032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-xsxdp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-070032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-070032                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m27s  kube-proxy       
	  Normal  Starting                 6m33s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m33s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s  kubelet          Node ha-070032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s  kubelet          Node ha-070032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s  kubelet          Node ha-070032 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-070032 status is now: NodeReady
	  Normal  RegisteredNode           5m29s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	  Normal  RegisteredNode           4m15s  node-controller  Node ha-070032 event: Registered Node ha-070032 in Controller
	
	
	Name:               ha-070032-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_07_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:07:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:10:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 10 Dec 2024 00:09:33 +0000   Tue, 10 Dec 2024 00:11:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    ha-070032-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c2b302d819044f8ad0494a0ee312d67
	  System UUID:                2c2b302d-8190-44f8-ad04-94a0ee312d67
	  Boot ID:                    b80c4e1c-4168-43bd-ac70-470e7e9703f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7gbz8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-070032-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-69btk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m37s
	  kube-system                 kube-apiserver-ha-070032-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-070032-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-7fm88                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kube-system                 kube-scheduler-ha-070032-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-070032-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m32s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m37s                  cidrAllocator    Node ha-070032-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-070032-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-070032-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-070032-m02 event: Registered Node ha-070032-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-070032-m02 status is now: NodeNotReady
	
	
	Name:               ha-070032-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_08_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:08:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:12:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:08:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:09:45 +0000   Tue, 10 Dec 2024 00:09:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    ha-070032-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7af7f783967c41bab4027928f3eb1ce2
	  System UUID:                7af7f783-967c-41ba-b402-7928f3eb1ce2
	  Boot ID:                    d7bca268-a1b9-47e2-900d-e8e3d560bcf4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pw24w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 etcd-ha-070032-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kindnet-gbrrg                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m24s
	  kube-system                 kube-apiserver-ha-070032-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-070032-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-bhnsm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-ha-070032-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-070032-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m19s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m24s                  cidrAllocator    Node ha-070032-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m24s (x8 over 4m24s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x8 over 4m24s)  kubelet          Node ha-070032-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x7 over 4m24s)  kubelet          Node ha-070032-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-070032-m03 event: Registered Node ha-070032-m03 in Controller
	
	
	Name:               ha-070032-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-070032-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=ha-070032
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_10T00_09_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:09:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-070032-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:13:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:10:20 +0000   Tue, 10 Dec 2024 00:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.178
	  Hostname:    ha-070032-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1722ee99e8fc4ae7bbf0809a3824e471
	  System UUID:                1722ee99-e8fc-4ae7-bbf0-809a3824e471
	  Boot ID:                    4df30219-5a9e-41b4-adfb-6890ccd87aac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-knnxw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m16s
	  kube-system                 kube-proxy-k8xs7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m13s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m18s                  cidrAllocator    Node ha-070032-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m18s (x2 over 3m18s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x2 over 3m18s)  kubelet          Node ha-070032-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x2 over 3m18s)  kubelet          Node ha-070032-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-070032-m04 event: Registered Node ha-070032-m04 in Controller
	  Normal  NodeReady                2m58s                  kubelet          Node ha-070032-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec10 00:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052638] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037715] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 00:06] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.906851] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.611346] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.711169] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.053296] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050206] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.175256] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.129791] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.262857] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +3.716566] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.745437] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.033385] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.073983] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.636013] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.381804] kauditd_printk_skb: 38 callbacks suppressed
	[Dec10 00:07] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06] <==
	{"level":"warn","ts":"2024-12-10T00:13:07.521274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.558878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.567474Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.570875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.579191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.585648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.591628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.594681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.597996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.603102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.609093Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.613040Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.614852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.618051Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.620562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.629522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.633944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.637756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.645107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.649180Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.652119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.655248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.661927Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.675978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-10T00:13:07.713129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f91ecb07db121930","from":"f91ecb07db121930","remote-peer-id":"de7cb460fd4f55eb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:13:07 up 7 min,  0 users,  load average: 0.18, 0.28, 0.15
	Linux ha-070032 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3] <==
	I1210 00:12:34.365324       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361278       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:44.361407       1 main.go:301] handling current node
	I1210 00:12:44.361435       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:44.361453       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:44.361686       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:44.361767       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:44.361952       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:44.361977       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:12:54.368862       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:12:54.368987       1 main.go:301] handling current node
	I1210 00:12:54.369042       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:12:54.369048       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:12:54.369300       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:12:54.369307       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:12:54.369408       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:12:54.369414       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	I1210 00:13:04.367527       1 main.go:297] Handling node with IPs: map[192.168.39.187:{}]
	I1210 00:13:04.367672       1 main.go:301] handling current node
	I1210 00:13:04.367805       1 main.go:297] Handling node with IPs: map[192.168.39.198:{}]
	I1210 00:13:04.367844       1 main.go:324] Node ha-070032-m02 has CIDR [10.244.1.0/24] 
	I1210 00:13:04.368299       1 main.go:297] Handling node with IPs: map[192.168.39.244:{}]
	I1210 00:13:04.368342       1 main.go:324] Node ha-070032-m03 has CIDR [10.244.3.0/24] 
	I1210 00:13:04.368546       1 main.go:297] Handling node with IPs: map[192.168.39.178:{}]
	I1210 00:13:04.368573       1 main.go:324] Node ha-070032-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c] <==
	W1210 00:06:33.327544       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187]
	I1210 00:06:33.328436       1 controller.go:615] quota admission added evaluator for: endpoints
	I1210 00:06:33.332351       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 00:06:33.644177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1210 00:06:34.401030       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1210 00:06:34.426254       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1210 00:06:34.437836       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1210 00:06:39.341658       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1210 00:06:39.388665       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1210 00:09:16.643347       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53112: use of closed network connection
	E1210 00:09:16.826908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53130: use of closed network connection
	E1210 00:09:17.054445       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53146: use of closed network connection
	E1210 00:09:17.230406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53174: use of closed network connection
	E1210 00:09:17.395919       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53190: use of closed network connection
	E1210 00:09:17.578908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53210: use of closed network connection
	E1210 00:09:17.752762       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53234: use of closed network connection
	E1210 00:09:17.924915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53246: use of closed network connection
	E1210 00:09:18.096320       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53250: use of closed network connection
	E1210 00:09:18.374453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53288: use of closed network connection
	E1210 00:09:18.551219       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53308: use of closed network connection
	E1210 00:09:18.715487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53328: use of closed network connection
	E1210 00:09:18.882307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53350: use of closed network connection
	E1210 00:09:19.053232       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53360: use of closed network connection
	E1210 00:09:19.219127       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:53388: use of closed network connection
	W1210 00:10:43.338652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.187 192.168.39.244]
	
	
	==> kube-controller-manager [d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d] <==
	I1210 00:09:49.805217       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-070032-m04" podCIDRs=["10.244.4.0/24"]
	I1210 00:09:49.805335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.805501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:49.830568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.055099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:50.429393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:52.233446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.527465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.529595       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-070032-m04"
	I1210 00:09:53.635341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.748163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:09:53.769858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:00.115956       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.020321       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:09.021003       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:10:09.036523       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:12.188838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:10:20.604295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m04"
	I1210 00:11:07.214303       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-070032-m04"
	I1210 00:11:07.214659       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.239149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:07.332434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.113905ms"
	I1210 00:11:07.332808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="177.2µs"
	I1210 00:11:08.619804       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	I1210 00:11:12.462357       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-070032-m02"
	
	
	==> kube-proxy [d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:06:40.034153       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 00:06:40.050742       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	E1210 00:06:40.050886       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:06:40.097328       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:06:40.097397       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:06:40.097429       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:06:40.099955       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:06:40.100221       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:06:40.100242       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:06:40.102079       1 config.go:199] "Starting service config controller"
	I1210 00:06:40.102108       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:06:40.102130       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:06:40.102134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:06:40.103442       1 config.go:328] "Starting node config controller"
	I1210 00:06:40.103468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:06:40.203097       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:06:40.203185       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:06:40.203635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca] <==
	W1210 00:06:32.612869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:06:32.612911       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.694127       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:06:32.694210       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.728214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:06:32.728261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.890681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:06:32.890785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:32.906571       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 00:06:32.906947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 00:06:33.046474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 00:06:33.046616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1210 00:06:36.200867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1210 00:09:49.873453       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.876571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-r2tf6\": pod kube-proxy-r2tf6 is already assigned to node \"ha-070032-m04\"" pod="kube-system/kube-proxy-r2tf6"
	I1210 00:09:49.878867       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-r2tf6" node="ha-070032-m04"
	E1210 00:09:49.879144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.879364       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-v5wzl\": pod kindnet-v5wzl is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-v5wzl"
	I1210 00:09:49.879740       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-v5wzl" node="ha-070032-m04"
	E1210 00:09:49.938476       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-j8rtf" node="ha-070032-m04"
	E1210 00:09:49.939506       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-j8rtf\": pod kindnet-j8rtf is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-j8rtf"
	E1210 00:09:51.707755       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	E1210 00:09:51.707858       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f925375b-3698-422b-a607-5a92ae55da32(kube-system/kindnet-nqxxb) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-nqxxb"
	E1210 00:09:51.707911       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nqxxb\": pod kindnet-nqxxb is already assigned to node \"ha-070032-m04\"" pod="kube-system/kindnet-nqxxb"
	I1210 00:09:51.707964       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nqxxb" node="ha-070032-m04"
	
	
	==> kubelet <==
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426250    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:34 ha-070032 kubelet[1308]: E1210 00:11:34.426301    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789494424141935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.428969    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:44 ha-070032 kubelet[1308]: E1210 00:11:44.429023    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789504427653710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430352    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:11:54 ha-070032 kubelet[1308]: E1210 00:11:54.430374    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789514430120521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432645    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:04 ha-070032 kubelet[1308]: E1210 00:12:04.432732    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789524431673132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434466    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:14 ha-070032 kubelet[1308]: E1210 00:12:14.434800    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789534434193110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436591    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:24 ha-070032 kubelet[1308]: E1210 00:12:24.436615    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789544436265231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.323013    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 00:12:34 ha-070032 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 00:12:34 ha-070032 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438072    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:34 ha-070032 kubelet[1308]: E1210 00:12:34.438102    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789554437642598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439455    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:44 ha-070032 kubelet[1308]: E1210 00:12:44.439836    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789564439127012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441399    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:12:54 ha-070032 kubelet[1308]: E1210 00:12:54.441436    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789574440681046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:04 ha-070032 kubelet[1308]: E1210 00:13:04.443996    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789584442497732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 00:13:04 ha-070032 kubelet[1308]: E1210 00:13:04.444213    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733789584442497732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.43s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.72s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-070032 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-070032 -v=7 --alsologtostderr
E1210 00:15:09.288935   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-070032 -v=7 --alsologtostderr: exit status 82 (2m1.864856069s)

                                                
                                                
-- stdout --
	* Stopping node "ha-070032-m04"  ...
	* Stopping node "ha-070032-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:13:08.726244  103269 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:13:08.726365  103269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:13:08.726375  103269 out.go:358] Setting ErrFile to fd 2...
	I1210 00:13:08.726380  103269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:13:08.726541  103269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:13:08.726786  103269 out.go:352] Setting JSON to false
	I1210 00:13:08.726885  103269 mustload.go:65] Loading cluster: ha-070032
	I1210 00:13:08.727304  103269 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:13:08.727411  103269 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:13:08.727594  103269 mustload.go:65] Loading cluster: ha-070032
	I1210 00:13:08.727723  103269 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:13:08.727778  103269 stop.go:39] StopHost: ha-070032-m04
	I1210 00:13:08.728247  103269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:13:08.728308  103269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:13:08.743745  103269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I1210 00:13:08.744244  103269 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:13:08.744961  103269 main.go:141] libmachine: Using API Version  1
	I1210 00:13:08.744990  103269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:13:08.745367  103269 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:13:08.747887  103269 out.go:177] * Stopping node "ha-070032-m04"  ...
	I1210 00:13:08.749260  103269 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 00:13:08.749301  103269 main.go:141] libmachine: (ha-070032-m04) Calling .DriverName
	I1210 00:13:08.749494  103269 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 00:13:08.749516  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHHostname
	I1210 00:13:08.752220  103269 main.go:141] libmachine: (ha-070032-m04) DBG | domain ha-070032-m04 has defined MAC address 52:54:00:e9:12:c3 in network mk-ha-070032
	I1210 00:13:08.752621  103269 main.go:141] libmachine: (ha-070032-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:12:c3", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:09:34 +0000 UTC Type:0 Mac:52:54:00:e9:12:c3 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-070032-m04 Clientid:01:52:54:00:e9:12:c3}
	I1210 00:13:08.752676  103269 main.go:141] libmachine: (ha-070032-m04) DBG | domain ha-070032-m04 has defined IP address 192.168.39.178 and MAC address 52:54:00:e9:12:c3 in network mk-ha-070032
	I1210 00:13:08.752783  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHPort
	I1210 00:13:08.752947  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHKeyPath
	I1210 00:13:08.753095  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHUsername
	I1210 00:13:08.753241  103269 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m04/id_rsa Username:docker}
	I1210 00:13:08.839611  103269 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 00:13:08.892873  103269 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 00:13:08.946011  103269 main.go:141] libmachine: Stopping "ha-070032-m04"...
	I1210 00:13:08.946051  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetState
	I1210 00:13:08.947527  103269 main.go:141] libmachine: (ha-070032-m04) Calling .Stop
	I1210 00:13:08.950982  103269 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 0/120
	I1210 00:13:10.112620  103269 main.go:141] libmachine: (ha-070032-m04) Calling .GetState
	I1210 00:13:10.113842  103269 main.go:141] libmachine: Machine "ha-070032-m04" was stopped.
	I1210 00:13:10.113867  103269 stop.go:75] duration metric: took 1.364605613s to stop
	I1210 00:13:10.113887  103269 stop.go:39] StopHost: ha-070032-m03
	I1210 00:13:10.114185  103269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:13:10.114240  103269 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:13:10.130337  103269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I1210 00:13:10.130847  103269 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:13:10.131340  103269 main.go:141] libmachine: Using API Version  1
	I1210 00:13:10.131362  103269 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:13:10.131662  103269 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:13:10.134450  103269 out.go:177] * Stopping node "ha-070032-m03"  ...
	I1210 00:13:10.135549  103269 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 00:13:10.135583  103269 main.go:141] libmachine: (ha-070032-m03) Calling .DriverName
	I1210 00:13:10.135798  103269 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 00:13:10.135825  103269 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHHostname
	I1210 00:13:10.138426  103269 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:13:10.138911  103269 main.go:141] libmachine: (ha-070032-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:e7:81", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:08:10 +0000 UTC Type:0 Mac:52:54:00:36:e7:81 Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:ha-070032-m03 Clientid:01:52:54:00:36:e7:81}
	I1210 00:13:10.138959  103269 main.go:141] libmachine: (ha-070032-m03) DBG | domain ha-070032-m03 has defined IP address 192.168.39.244 and MAC address 52:54:00:36:e7:81 in network mk-ha-070032
	I1210 00:13:10.139128  103269 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHPort
	I1210 00:13:10.139302  103269 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHKeyPath
	I1210 00:13:10.139468  103269 main.go:141] libmachine: (ha-070032-m03) Calling .GetSSHUsername
	I1210 00:13:10.139613  103269 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m03/id_rsa Username:docker}
	I1210 00:13:10.227000  103269 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 00:13:10.280222  103269 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 00:13:10.339097  103269 main.go:141] libmachine: Stopping "ha-070032-m03"...
	I1210 00:13:10.339126  103269 main.go:141] libmachine: (ha-070032-m03) Calling .GetState
	I1210 00:13:10.340811  103269 main.go:141] libmachine: (ha-070032-m03) Calling .Stop
	I1210 00:13:10.344776  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 0/120
	I1210 00:13:11.346315  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 1/120
	I1210 00:13:12.347502  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 2/120
	I1210 00:13:13.348731  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 3/120
	I1210 00:13:14.349933  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 4/120
	I1210 00:13:15.351532  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 5/120
	I1210 00:13:16.353095  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 6/120
	I1210 00:13:17.354260  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 7/120
	I1210 00:13:18.355881  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 8/120
	I1210 00:13:19.357151  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 9/120
	I1210 00:13:20.359007  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 10/120
	I1210 00:13:21.360273  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 11/120
	I1210 00:13:22.361672  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 12/120
	I1210 00:13:23.363336  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 13/120
	I1210 00:13:24.364900  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 14/120
	I1210 00:13:25.366794  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 15/120
	I1210 00:13:26.368202  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 16/120
	I1210 00:13:27.369536  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 17/120
	I1210 00:13:28.370947  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 18/120
	I1210 00:13:29.372292  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 19/120
	I1210 00:13:30.374413  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 20/120
	I1210 00:13:31.376176  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 21/120
	I1210 00:13:32.378855  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 22/120
	I1210 00:13:33.380398  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 23/120
	I1210 00:13:34.381873  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 24/120
	I1210 00:13:35.384573  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 25/120
	I1210 00:13:36.386449  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 26/120
	I1210 00:13:37.387918  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 27/120
	I1210 00:13:38.389604  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 28/120
	I1210 00:13:39.391393  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 29/120
	I1210 00:13:40.393135  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 30/120
	I1210 00:13:41.394741  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 31/120
	I1210 00:13:42.396283  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 32/120
	I1210 00:13:43.397752  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 33/120
	I1210 00:13:44.399439  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 34/120
	I1210 00:13:45.400926  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 35/120
	I1210 00:13:46.402149  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 36/120
	I1210 00:13:47.403486  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 37/120
	I1210 00:13:48.404728  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 38/120
	I1210 00:13:49.406044  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 39/120
	I1210 00:13:50.407753  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 40/120
	I1210 00:13:51.408990  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 41/120
	I1210 00:13:52.410174  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 42/120
	I1210 00:13:53.411311  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 43/120
	I1210 00:13:54.412453  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 44/120
	I1210 00:13:55.413992  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 45/120
	I1210 00:13:56.415361  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 46/120
	I1210 00:13:57.417655  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 47/120
	I1210 00:13:58.419164  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 48/120
	I1210 00:13:59.420333  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 49/120
	I1210 00:14:00.422042  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 50/120
	I1210 00:14:01.423440  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 51/120
	I1210 00:14:02.424840  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 52/120
	I1210 00:14:03.426203  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 53/120
	I1210 00:14:04.427603  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 54/120
	I1210 00:14:05.430002  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 55/120
	I1210 00:14:06.431358  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 56/120
	I1210 00:14:07.433297  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 57/120
	I1210 00:14:08.434619  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 58/120
	I1210 00:14:09.436009  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 59/120
	I1210 00:14:10.437843  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 60/120
	I1210 00:14:11.439197  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 61/120
	I1210 00:14:12.441216  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 62/120
	I1210 00:14:13.442494  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 63/120
	I1210 00:14:14.444127  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 64/120
	I1210 00:14:15.446181  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 65/120
	I1210 00:14:16.447461  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 66/120
	I1210 00:14:17.449122  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 67/120
	I1210 00:14:18.450397  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 68/120
	I1210 00:14:19.451891  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 69/120
	I1210 00:14:20.453926  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 70/120
	I1210 00:14:21.455348  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 71/120
	I1210 00:14:22.456999  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 72/120
	I1210 00:14:23.458257  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 73/120
	I1210 00:14:24.460055  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 74/120
	I1210 00:14:25.462213  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 75/120
	I1210 00:14:26.463801  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 76/120
	I1210 00:14:27.465067  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 77/120
	I1210 00:14:28.466411  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 78/120
	I1210 00:14:29.467791  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 79/120
	I1210 00:14:30.469689  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 80/120
	I1210 00:14:31.471109  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 81/120
	I1210 00:14:32.472405  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 82/120
	I1210 00:14:33.473801  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 83/120
	I1210 00:14:34.475180  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 84/120
	I1210 00:14:35.476930  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 85/120
	I1210 00:14:36.479234  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 86/120
	I1210 00:14:37.480672  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 87/120
	I1210 00:14:38.482149  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 88/120
	I1210 00:14:39.483506  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 89/120
	I1210 00:14:40.485285  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 90/120
	I1210 00:14:41.486739  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 91/120
	I1210 00:14:42.489035  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 92/120
	I1210 00:14:43.490461  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 93/120
	I1210 00:14:44.491825  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 94/120
	I1210 00:14:45.493849  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 95/120
	I1210 00:14:46.495218  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 96/120
	I1210 00:14:47.497138  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 97/120
	I1210 00:14:48.498592  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 98/120
	I1210 00:14:49.500020  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 99/120
	I1210 00:14:50.501645  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 100/120
	I1210 00:14:51.503182  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 101/120
	I1210 00:14:52.504443  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 102/120
	I1210 00:14:53.505915  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 103/120
	I1210 00:14:54.507412  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 104/120
	I1210 00:14:55.509047  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 105/120
	I1210 00:14:56.510338  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 106/120
	I1210 00:14:57.512519  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 107/120
	I1210 00:14:58.514273  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 108/120
	I1210 00:14:59.515755  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 109/120
	I1210 00:15:00.517661  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 110/120
	I1210 00:15:01.519127  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 111/120
	I1210 00:15:02.521247  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 112/120
	I1210 00:15:03.523024  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 113/120
	I1210 00:15:04.524291  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 114/120
	I1210 00:15:05.525932  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 115/120
	I1210 00:15:06.528156  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 116/120
	I1210 00:15:07.529585  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 117/120
	I1210 00:15:08.530899  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 118/120
	I1210 00:15:09.532339  103269 main.go:141] libmachine: (ha-070032-m03) Waiting for machine to stop 119/120
	I1210 00:15:10.533373  103269 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 00:15:10.533461  103269 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1210 00:15:10.535707  103269 out.go:201] 
	W1210 00:15:10.537184  103269 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1210 00:15:10.537207  103269 out.go:270] * 
	* 
	W1210 00:15:10.540628  103269 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:15:10.541951  103269 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-070032 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-070032 --wait=true -v=7 --alsologtostderr
E1210 00:15:36.992981   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:15:47.491218   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-070032 --wait=true -v=7 --alsologtostderr: (4m21.181471594s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-070032
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.994111588s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-070032 node start m02 -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-070032 -v=7                                                           | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-070032 -v=7                                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-070032 --wait=true -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:15 UTC | 10 Dec 24 00:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-070032                                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:15:10
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:15:10.598169  103771 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:15:10.598303  103771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:15:10.598314  103771 out.go:358] Setting ErrFile to fd 2...
	I1210 00:15:10.598319  103771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:15:10.598588  103771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:15:10.599277  103771 out.go:352] Setting JSON to false
	I1210 00:15:10.600512  103771 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7062,"bootTime":1733782649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:15:10.600667  103771 start.go:139] virtualization: kvm guest
	I1210 00:15:10.603059  103771 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:15:10.604442  103771 notify.go:220] Checking for updates...
	I1210 00:15:10.604487  103771 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:15:10.605786  103771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:15:10.607020  103771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:15:10.608351  103771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:15:10.609675  103771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:15:10.610868  103771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:15:10.612418  103771 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:15:10.612526  103771 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:15:10.612978  103771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:15:10.613015  103771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:15:10.628312  103771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I1210 00:15:10.628773  103771 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:15:10.629326  103771 main.go:141] libmachine: Using API Version  1
	I1210 00:15:10.629353  103771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:15:10.629755  103771 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:15:10.629920  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.663515  103771 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:15:10.664563  103771 start.go:297] selected driver: kvm2
	I1210 00:15:10.664575  103771 start.go:901] validating driver "kvm2" against &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:15:10.664765  103771 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:15:10.665092  103771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:15:10.665153  103771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:15:10.679405  103771 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:15:10.680103  103771 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:15:10.680153  103771 cni.go:84] Creating CNI manager for ""
	I1210 00:15:10.680231  103771 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1210 00:15:10.680289  103771 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:15:10.680407  103771 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:15:10.682620  103771 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:15:10.683791  103771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:15:10.683844  103771 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:15:10.683856  103771 cache.go:56] Caching tarball of preloaded images
	I1210 00:15:10.683938  103771 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:15:10.683950  103771 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:15:10.684059  103771 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:15:10.684272  103771 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:15:10.684315  103771 start.go:364] duration metric: took 25.728µs to acquireMachinesLock for "ha-070032"
	I1210 00:15:10.684333  103771 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:15:10.684338  103771 fix.go:54] fixHost starting: 
	I1210 00:15:10.684730  103771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:15:10.684792  103771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:15:10.698703  103771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I1210 00:15:10.699126  103771 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:15:10.699629  103771 main.go:141] libmachine: Using API Version  1
	I1210 00:15:10.699656  103771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:15:10.699949  103771 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:15:10.700165  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.700362  103771 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:15:10.701983  103771 fix.go:112] recreateIfNeeded on ha-070032: state=Running err=<nil>
	W1210 00:15:10.702000  103771 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:15:10.703624  103771 out.go:177] * Updating the running kvm2 "ha-070032" VM ...
	I1210 00:15:10.704812  103771 machine.go:93] provisionDockerMachine start ...
	I1210 00:15:10.704833  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.705043  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.707678  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.708162  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.708189  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.708348  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.708510  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.708671  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.708771  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.708915  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.709155  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.709176  103771 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:15:10.811320  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:15:10.811355  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:10.811645  103771 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:15:10.811676  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:10.811863  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.814597  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.815107  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.815130  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.815317  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.815523  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.815682  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.815823  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.816019  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.816209  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.816231  103771 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:15:10.929401  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:15:10.929448  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.931892  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.932267  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.932296  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.932452  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.932649  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.932821  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.932962  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.933126  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.933311  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.933326  103771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:15:11.034787  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:15:11.034821  103771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:15:11.034863  103771 buildroot.go:174] setting up certificates
	I1210 00:15:11.034878  103771 provision.go:84] configureAuth start
	I1210 00:15:11.034893  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:11.035195  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:15:11.037771  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.038159  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.038186  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.038291  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.040603  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.040973  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.041016  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.041143  103771 provision.go:143] copyHostCerts
	I1210 00:15:11.041171  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:15:11.041220  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:15:11.041243  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:15:11.041322  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:15:11.041428  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:15:11.041454  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:15:11.041464  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:15:11.041505  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:15:11.041585  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:15:11.041615  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:15:11.041624  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:15:11.041653  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:15:11.041737  103771 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
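configureAuth then regenerates the machine's server certificate with SANs covering the loopback address, the node IP 192.168.39.187, the hostname and the generic localhost/minikube names listed above. A self-contained sketch of that kind of issuance using Go's crypto/x509 (the throwaway CA below stands in for .minikube/certs/ca.pem; this is not the provision.go code itself):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Server certificate with the SAN list from the provision.go line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-070032"}},
    		DNSNames:     []string{"ha-070032", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }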
	I1210 00:15:11.334330  103771 provision.go:177] copyRemoteCerts
	I1210 00:15:11.334411  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:15:11.334445  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.337216  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.337568  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.337600  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.337747  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:11.337944  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.338094  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:11.338242  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:15:11.417270  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:15:11.417340  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:15:11.441313  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:15:11.441389  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:15:11.465183  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:15:11.465251  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:15:11.488058  103771 provision.go:87] duration metric: took 453.163259ms to configureAuth
	I1210 00:15:11.488082  103771 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:15:11.488287  103771 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:15:11.488358  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.490911  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.491295  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.491323  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.491474  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:11.491662  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.491794  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.491904  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:11.492002  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:11.492159  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:11.492174  103771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:16:42.313849  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:16:42.313900  103771 machine.go:96] duration metric: took 1m31.60907185s to provisionDockerMachine
	I1210 00:16:42.313921  103771 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:16:42.313938  103771 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:16:42.313976  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.314315  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:16:42.314358  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.317604  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.318107  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.318136  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.318345  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.318548  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.318730  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.318883  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.396955  103771 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:16:42.401086  103771 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:16:42.401105  103771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:16:42.401183  103771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:16:42.401256  103771 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:16:42.401267  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:16:42.401348  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:16:42.410256  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:16:42.432569  103771 start.go:296] duration metric: took 118.633386ms for postStartSetup
	I1210 00:16:42.432612  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.432888  103771 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1210 00:16:42.432915  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.435790  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.436196  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.436222  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.436374  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.436549  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.436692  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.436824  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	W1210 00:16:42.516159  103771 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
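The non-zero ls above is the expected outcome when there is nothing to restore: a missing /var/lib/minikube/backup is treated as "skip restore" rather than as a failure. A local Go sketch of that probe, for illustration only:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	entries, err := os.ReadDir("/var/lib/minikube/backup")
    	if errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("no backup folder, skipping restore")
    		return
    	}
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "cannot read backup folder:", err)
    		os.Exit(1)
    	}
    	for _, e := range entries {
    		fmt.Println("would restore:", e.Name())
    	}
    }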
	I1210 00:16:42.516187  103771 fix.go:56] duration metric: took 1m31.831847584s for fixHost
	I1210 00:16:42.516217  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.519071  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.519473  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.519494  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.519673  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.519878  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.520043  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.520206  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.520387  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:16:42.520577  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:16:42.520590  103771 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:16:42.619141  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789802.579379677
	
	I1210 00:16:42.619167  103771 fix.go:216] guest clock: 1733789802.579379677
	I1210 00:16:42.619175  103771 fix.go:229] Guest: 2024-12-10 00:16:42.579379677 +0000 UTC Remote: 2024-12-10 00:16:42.516197212 +0000 UTC m=+91.962884276 (delta=63.182465ms)
	I1210 00:16:42.619220  103771 fix.go:200] guest clock delta is within tolerance: 63.182465ms
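The clock check works by running date +%s.%N on the guest and comparing the parsed result with the host's wall clock; here the guest is about 63ms ahead, which is accepted. A small Go sketch of the same comparison using the values from this log (the 2-second tolerance is an assumed threshold for illustration, not a value taken from the output):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, taken from the log line above.
    	guestOut := "1733789802.579379677"
    	parts := strings.SplitN(guestOut, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// Host wall clock at the moment the command returned ("Remote" in the log).
    	host := time.Date(2024, 12, 10, 0, 16, 42, 516197212, time.UTC)

    	delta := guest.Sub(host)
    	const tolerance = 2 * time.Second // assumed threshold, not read from this log
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
    	}
    }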
	I1210 00:16:42.619225  103771 start.go:83] releasing machines lock for "ha-070032", held for 1m31.934899017s
	I1210 00:16:42.619247  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.619470  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:16:42.621975  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.622327  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.622357  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.622499  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623063  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623266  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623341  103771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:16:42.623400  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.623450  103771 ssh_runner.go:195] Run: cat /version.json
	I1210 00:16:42.623471  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.626110  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626134  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626487  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.626516  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626551  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.626582  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626667  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.626675  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.626845  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.626882  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.626977  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.626977  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.627097  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.627195  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.699828  103771 ssh_runner.go:195] Run: systemctl --version
	I1210 00:16:42.722140  103771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:16:42.883650  103771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:16:42.889845  103771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:16:42.889907  103771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:16:42.899213  103771 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
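Before picking a CNI, minikube scans /etc/cni/net.d and renames any pre-existing bridge or podman configs with a .mk_disabled suffix so they cannot conflict with the CNI it installs (kindnet for this multi-node profile); in this run there was nothing to disable. A rough local equivalent of that cleanup in Go, purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	disabled := 0
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    				continue
    			}
    			disabled++
    		}
    	}
    	if disabled == 0 {
    		fmt.Println("no active bridge cni configs found - nothing to disable")
    	}
    }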
	I1210 00:16:42.899243  103771 start.go:495] detecting cgroup driver to use...
	I1210 00:16:42.899316  103771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:16:42.914795  103771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:16:42.927943  103771 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:16:42.928003  103771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:16:42.940509  103771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:16:42.952543  103771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:16:43.093818  103771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:16:43.231556  103771 docker.go:233] disabling docker service ...
	I1210 00:16:43.231622  103771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:16:43.246638  103771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:16:43.259046  103771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:16:43.399360  103771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:16:43.542018  103771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:16:43.555195  103771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:16:43.572208  103771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:16:43.572275  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.581953  103771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:16:43.582010  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.592046  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.601894  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.610888  103771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:16:43.620246  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.629413  103771 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.639149  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.648162  103771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:16:43.656172  103771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:16:43.664176  103771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:16:43.806178  103771 ssh_runner.go:195] Run: sudo systemctl restart crio
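The sed commands above adjust CRI-O's drop-in config: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs, conmon is moved to the pod cgroup, and unprivileged low ports are allowed; the daemon-reload and restart then apply the changes. A hypothetical Go stand-in for the two central edits (minikube itself does this with sed over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same effect as the two sed invocations in the log.
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("updated", path, "- restart crio for the change to take effect")
    }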
	I1210 00:16:44.731566  103771 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:16:44.731644  103771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:16:44.736363  103771 start.go:563] Will wait 60s for crictl version
	I1210 00:16:44.736433  103771 ssh_runner.go:195] Run: which crictl
	I1210 00:16:44.739953  103771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:16:44.776710  103771 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
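After restarting CRI-O, minikube waits up to 60 seconds for the runtime socket to reappear and then confirms the runtime is answering through crictl, which reports CRI-O 1.29.1 here. A simplified local stand-in for those two waits:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // Poll for the CRI-O socket until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same probe the log shows: ask crictl for the runtime version.
    	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	os.Stdout.Write(out)
    }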
	I1210 00:16:44.776835  103771 ssh_runner.go:195] Run: crio --version
	I1210 00:16:44.802038  103771 ssh_runner.go:195] Run: crio --version
	I1210 00:16:44.829452  103771 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:16:44.830662  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:16:44.833111  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:44.833475  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:44.833501  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:44.833765  103771 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:16:44.838204  103771 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:16:44.838359  103771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:16:44.838413  103771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:16:44.880115  103771 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:16:44.880133  103771 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:16:44.880186  103771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:16:44.916770  103771 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:16:44.916798  103771 cache_images.go:84] Images are preloaded, skipping loading
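The preload check runs crictl images --output json and compares the result against the image set expected from the preload tarball; since everything is already present, extraction is skipped. A sketch of that kind of check (the "images"/"repoTags" field names assume the CRI JSON shape and are illustrative, not verified against this crictl version):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Assumed shape of `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	const want = "registry.k8s.io/pause:3.10" // pause image named earlier in this log
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("preloaded:", want)
    				return
    			}
    		}
    	}
    	fmt.Println("missing:", want, "- extraction from the preload tarball would be needed")
    }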
	I1210 00:16:44.916811  103771 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:16:44.916967  103771 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:16:44.917056  103771 ssh_runner.go:195] Run: crio config
	I1210 00:16:44.965626  103771 cni.go:84] Creating CNI manager for ""
	I1210 00:16:44.965650  103771 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1210 00:16:44.965661  103771 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:16:44.965685  103771 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:16:44.965796  103771 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:16:44.965815  103771 kube-vip.go:115] generating kube-vip config ...
	I1210 00:16:44.965859  103771 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:16:44.976266  103771 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:16:44.976383  103771 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
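This static pod runs kube-vip on every control-plane node and lets the elected leader advertise 192.168.39.254 as a floating API server address, with load-balancing to port 8443 enabled. A quick diagnostic sketch, not part of minikube, for checking from the host that the VIP accepts connections:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// VIP and port taken from the kube-vip config above.
    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "VIP not reachable:", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("control-plane VIP is accepting connections on 8443")
    }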
	I1210 00:16:44.976438  103771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:16:44.984879  103771 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:16:44.984929  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:16:44.993009  103771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:16:45.008190  103771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:16:45.023324  103771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:16:45.038064  103771 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:16:45.055047  103771 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:16:45.058276  103771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:16:45.196438  103771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:16:45.210187  103771 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:16:45.210212  103771 certs.go:194] generating shared ca certs ...
	I1210 00:16:45.210252  103771 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.210432  103771 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:16:45.210476  103771 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:16:45.210485  103771 certs.go:256] generating profile certs ...
	I1210 00:16:45.210553  103771 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:16:45.210603  103771 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df
	I1210 00:16:45.210619  103771 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:16:45.353544  103771 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df ...
	I1210 00:16:45.353574  103771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df: {Name:mk4654b3496b9eef04c053407d2661010f22e0ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.353742  103771 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df ...
	I1210 00:16:45.353757  103771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df: {Name:mk08d3f17afea49a4ad236e77fa4cbea3a92387c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.353827  103771 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:16:45.353988  103771 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
	I1210 00:16:45.354128  103771 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:16:45.354143  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:16:45.354159  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:16:45.354170  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:16:45.354183  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:16:45.354195  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:16:45.354211  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:16:45.354223  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:16:45.354236  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:16:45.354282  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:16:45.354308  103771 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:16:45.354318  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:16:45.354340  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:16:45.354363  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:16:45.354383  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:16:45.354418  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:16:45.354444  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.354457  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.354469  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.355099  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:16:45.378490  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:16:45.399893  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:16:45.423690  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:16:45.446556  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 00:16:45.467760  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:16:45.489233  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:16:45.511664  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:16:45.532906  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:16:45.554290  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:16:45.575870  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:16:45.596724  103771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:16:45.611721  103771 ssh_runner.go:195] Run: openssl version
	I1210 00:16:45.617063  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:16:45.626333  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.630294  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.630345  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.635359  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:16:45.643407  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:16:45.653116  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.657148  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.657194  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.662255  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:16:45.670551  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:16:45.679937  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.683774  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.683817  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.688854  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:16:45.697366  103771 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:16:45.701578  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:16:45.706881  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:16:45.712179  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:16:45.717490  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:16:45.722999  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:16:45.728056  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
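The openssl probes above verify that each control-plane certificate remains valid for at least another 24 hours (-checkend 86400); a certificate failing the check would be regenerated before kubeadm runs. The same test expressed in Go against one of the paths from the log, as a local diagnostic sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// One of the certificate paths checked in the log above.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Equivalent of `openssl x509 -checkend 86400`.
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 24h, would be regenerated")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }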
	I1210 00:16:45.732967  103771 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:16:45.733075  103771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:16:45.733120  103771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:16:45.768818  103771 cri.go:89] found id: "f2d29a23909f92fda903a5601d13c1c6a2dc667c9c3a1f81d56dee246338a18b"
	I1210 00:16:45.768836  103771 cri.go:89] found id: "0fa25a8d120e2f2c7b154619f684076b5851d4ee3636fe33a6af34540cb69db4"
	I1210 00:16:45.768839  103771 cri.go:89] found id: "ace54247ca40b5deb01ae561833de3524e7ff36138c9356a9512ab5b925bbb88"
	I1210 00:16:45.768842  103771 cri.go:89] found id: "e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8"
	I1210 00:16:45.768845  103771 cri.go:89] found id: "7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea"
	I1210 00:16:45.768848  103771 cri.go:89] found id: "a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b"
	I1210 00:16:45.768851  103771 cri.go:89] found id: "4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3"
	I1210 00:16:45.768853  103771 cri.go:89] found id: "d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2"
	I1210 00:16:45.768855  103771 cri.go:89] found id: "2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd"
	I1210 00:16:45.768860  103771 cri.go:89] found id: "a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c"
	I1210 00:16:45.768863  103771 cri.go:89] found id: "1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca"
	I1210 00:16:45.768875  103771 cri.go:89] found id: "3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06"
	I1210 00:16:45.768881  103771 cri.go:89] found id: "d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d"
	I1210 00:16:45.768884  103771 cri.go:89] found id: ""
	I1210 00:16:45.768917  103771 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (385.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 stop -v=7 --alsologtostderr
E1210 00:20:09.288937   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:20:47.491729   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-070032 stop -v=7 --alsologtostderr: exit status 82 (2m0.4604368s)

                                                
                                                
-- stdout --
	* Stopping node "ha-070032-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:19:51.663922  106064 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:19:51.664327  106064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:51.664427  106064 out.go:358] Setting ErrFile to fd 2...
	I1210 00:19:51.664468  106064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:19:51.664853  106064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:19:51.665538  106064 out.go:352] Setting JSON to false
	I1210 00:19:51.665648  106064 mustload.go:65] Loading cluster: ha-070032
	I1210 00:19:51.666109  106064 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:51.666206  106064 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:19:51.666388  106064 mustload.go:65] Loading cluster: ha-070032
	I1210 00:19:51.666577  106064 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:19:51.666604  106064 stop.go:39] StopHost: ha-070032-m04
	I1210 00:19:51.667049  106064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:19:51.667106  106064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:19:51.682109  106064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I1210 00:19:51.682645  106064 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:19:51.683182  106064 main.go:141] libmachine: Using API Version  1
	I1210 00:19:51.683206  106064 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:19:51.683507  106064 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:19:51.685711  106064 out.go:177] * Stopping node "ha-070032-m04"  ...
	I1210 00:19:51.686985  106064 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 00:19:51.687008  106064 main.go:141] libmachine: (ha-070032-m04) Calling .DriverName
	I1210 00:19:51.687210  106064 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 00:19:51.687230  106064 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHHostname
	I1210 00:19:51.689725  106064 main.go:141] libmachine: (ha-070032-m04) DBG | domain ha-070032-m04 has defined MAC address 52:54:00:e9:12:c3 in network mk-ha-070032
	I1210 00:19:51.690193  106064 main.go:141] libmachine: (ha-070032-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:12:c3", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:19:20 +0000 UTC Type:0 Mac:52:54:00:e9:12:c3 Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:ha-070032-m04 Clientid:01:52:54:00:e9:12:c3}
	I1210 00:19:51.690221  106064 main.go:141] libmachine: (ha-070032-m04) DBG | domain ha-070032-m04 has defined IP address 192.168.39.178 and MAC address 52:54:00:e9:12:c3 in network mk-ha-070032
	I1210 00:19:51.690334  106064 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHPort
	I1210 00:19:51.690522  106064 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHKeyPath
	I1210 00:19:51.690787  106064 main.go:141] libmachine: (ha-070032-m04) Calling .GetSSHUsername
	I1210 00:19:51.690950  106064 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032-m04/id_rsa Username:docker}
	I1210 00:19:51.772179  106064 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 00:19:51.823940  106064 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 00:19:51.875569  106064 main.go:141] libmachine: Stopping "ha-070032-m04"...
	I1210 00:19:51.875594  106064 main.go:141] libmachine: (ha-070032-m04) Calling .GetState
	I1210 00:19:51.876973  106064 main.go:141] libmachine: (ha-070032-m04) Calling .Stop
	I1210 00:19:51.880191  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 0/120
	I1210 00:19:52.881534  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 1/120
	I1210 00:19:53.882912  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 2/120
	I1210 00:19:54.884221  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 3/120
	I1210 00:19:55.885704  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 4/120
	I1210 00:19:56.887776  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 5/120
	I1210 00:19:57.889124  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 6/120
	I1210 00:19:58.890415  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 7/120
	I1210 00:19:59.891829  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 8/120
	I1210 00:20:00.893182  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 9/120
	I1210 00:20:01.895302  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 10/120
	I1210 00:20:02.896618  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 11/120
	I1210 00:20:03.897863  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 12/120
	I1210 00:20:04.899314  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 13/120
	I1210 00:20:05.901077  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 14/120
	I1210 00:20:06.902861  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 15/120
	I1210 00:20:07.904861  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 16/120
	I1210 00:20:08.906395  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 17/120
	I1210 00:20:09.908625  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 18/120
	I1210 00:20:10.910017  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 19/120
	I1210 00:20:11.912294  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 20/120
	I1210 00:20:12.913586  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 21/120
	I1210 00:20:13.915009  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 22/120
	I1210 00:20:14.916340  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 23/120
	I1210 00:20:15.917777  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 24/120
	I1210 00:20:16.919665  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 25/120
	I1210 00:20:17.921003  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 26/120
	I1210 00:20:18.922251  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 27/120
	I1210 00:20:19.923557  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 28/120
	I1210 00:20:20.924858  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 29/120
	I1210 00:20:21.926816  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 30/120
	I1210 00:20:22.928196  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 31/120
	I1210 00:20:23.929398  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 32/120
	I1210 00:20:24.930685  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 33/120
	I1210 00:20:25.931738  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 34/120
	I1210 00:20:26.933644  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 35/120
	I1210 00:20:27.934880  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 36/120
	I1210 00:20:28.936267  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 37/120
	I1210 00:20:29.937685  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 38/120
	I1210 00:20:30.939295  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 39/120
	I1210 00:20:31.941301  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 40/120
	I1210 00:20:32.942437  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 41/120
	I1210 00:20:33.943884  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 42/120
	I1210 00:20:34.945336  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 43/120
	I1210 00:20:35.946536  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 44/120
	I1210 00:20:36.948607  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 45/120
	I1210 00:20:37.949747  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 46/120
	I1210 00:20:38.951308  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 47/120
	I1210 00:20:39.953058  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 48/120
	I1210 00:20:40.954605  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 49/120
	I1210 00:20:41.956694  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 50/120
	I1210 00:20:42.958008  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 51/120
	I1210 00:20:43.959467  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 52/120
	I1210 00:20:44.960768  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 53/120
	I1210 00:20:45.962935  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 54/120
	I1210 00:20:46.964839  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 55/120
	I1210 00:20:47.966370  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 56/120
	I1210 00:20:48.968197  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 57/120
	I1210 00:20:49.969978  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 58/120
	I1210 00:20:50.971695  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 59/120
	I1210 00:20:51.973852  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 60/120
	I1210 00:20:52.975211  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 61/120
	I1210 00:20:53.977068  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 62/120
	I1210 00:20:54.978557  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 63/120
	I1210 00:20:55.979941  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 64/120
	I1210 00:20:56.982075  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 65/120
	I1210 00:20:57.983496  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 66/120
	I1210 00:20:58.984844  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 67/120
	I1210 00:20:59.986310  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 68/120
	I1210 00:21:00.987881  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 69/120
	I1210 00:21:01.989876  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 70/120
	I1210 00:21:02.991126  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 71/120
	I1210 00:21:03.992463  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 72/120
	I1210 00:21:04.994488  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 73/120
	I1210 00:21:05.995725  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 74/120
	I1210 00:21:06.997596  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 75/120
	I1210 00:21:07.998875  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 76/120
	I1210 00:21:09.000870  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 77/120
	I1210 00:21:10.002271  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 78/120
	I1210 00:21:11.003605  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 79/120
	I1210 00:21:12.005839  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 80/120
	I1210 00:21:13.007076  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 81/120
	I1210 00:21:14.009003  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 82/120
	I1210 00:21:15.010271  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 83/120
	I1210 00:21:16.011795  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 84/120
	I1210 00:21:17.013768  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 85/120
	I1210 00:21:18.015402  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 86/120
	I1210 00:21:19.016801  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 87/120
	I1210 00:21:20.018041  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 88/120
	I1210 00:21:21.019328  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 89/120
	I1210 00:21:22.021369  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 90/120
	I1210 00:21:23.023390  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 91/120
	I1210 00:21:24.024889  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 92/120
	I1210 00:21:25.026496  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 93/120
	I1210 00:21:26.027759  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 94/120
	I1210 00:21:27.029692  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 95/120
	I1210 00:21:28.030979  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 96/120
	I1210 00:21:29.032990  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 97/120
	I1210 00:21:30.034270  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 98/120
	I1210 00:21:31.035800  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 99/120
	I1210 00:21:32.037791  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 100/120
	I1210 00:21:33.039472  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 101/120
	I1210 00:21:34.041479  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 102/120
	I1210 00:21:35.043095  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 103/120
	I1210 00:21:36.044351  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 104/120
	I1210 00:21:37.046525  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 105/120
	I1210 00:21:38.047978  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 106/120
	I1210 00:21:39.049190  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 107/120
	I1210 00:21:40.050624  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 108/120
	I1210 00:21:41.051986  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 109/120
	I1210 00:21:42.054180  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 110/120
	I1210 00:21:43.055749  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 111/120
	I1210 00:21:44.057002  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 112/120
	I1210 00:21:45.058218  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 113/120
	I1210 00:21:46.060532  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 114/120
	I1210 00:21:47.062643  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 115/120
	I1210 00:21:48.064079  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 116/120
	I1210 00:21:49.065702  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 117/120
	I1210 00:21:50.066894  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 118/120
	I1210 00:21:51.068148  106064 main.go:141] libmachine: (ha-070032-m04) Waiting for machine to stop 119/120
	I1210 00:21:52.069001  106064 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 00:21:52.069110  106064 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1210 00:21:52.070885  106064 out.go:201] 
	W1210 00:21:52.072257  106064 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1210 00:21:52.072276  106064 out.go:270] * 
	* 
	W1210 00:21:52.075476  106064 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:21:52.076796  106064 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-070032 stop -v=7 --alsologtostderr": exit status 82
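
The 120 "Waiting for machine to stop i/120" lines above come from a fixed poll-and-timeout pattern: the kvm2 driver asks the hypervisor to stop the VM, then re-checks its state once per second for up to 120 attempts before giving up, which is what surfaces here as GUEST_STOP_TIMEOUT and exit status 82. The Go sketch below only illustrates that pattern under stated assumptions; requestStop and vmState are hypothetical stand-ins, not minikube's real driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// requestStop and vmState are hypothetical stand-ins for the hypervisor
// calls the kvm2 driver makes; they are not minikube's actual API.
func requestStop(name string) error { return nil }
func vmState(name string) string    { return "Running" }

// stopVM asks the hypervisor to shut the VM down, then polls its state once
// per second for up to `attempts` tries, mirroring the "Waiting for machine
// to stop i/120" lines in the log above.
func stopVM(name string, attempts int) error {
	if err := requestStop(name); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState(name) == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// After the last attempt the VM is still running, which is the
	// condition reported above as GUEST_STOP_TIMEOUT / exit status 82.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopVM("ha-070032-m04", 120); err != nil {
		fmt.Println("stop err:", err)
	}
}
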
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr: (19.088124053s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr": 
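
The three assertions above (ha_test.go:545, :551, :554) parse the "minikube status" output and count how many control planes, kubelets and apiservers report the expected stopped state; because the stop command timed out, the counts do not match. The sketch below shows that style of count-based check under assumptions: countStopped and the literal strings are illustrative, not the test's actual code.

package main

import (
	"fmt"
	"strings"
)

// countStopped is an illustrative helper: it counts how many lines of a
// `minikube status` dump mention the given component as "Stopped".
func countStopped(statusOutput, component string) int {
	count := 0
	for _, line := range strings.Split(statusOutput, "\n") {
		if strings.Contains(line, component) && strings.Contains(line, "Stopped") {
			count++
		}
	}
	return count
}

func main() {
	// Hypothetical status output for a cluster where no node actually stopped.
	status := "kubelet: Running\nkubelet: Running\nkubelet: Running\napiserver: Running\napiserver: Running"
	if got := countStopped(status, "kubelet"); got != 3 {
		fmt.Printf("status says not three kubelets are stopped: got %d\n", got)
	}
	if got := countStopped(status, "apiserver"); got != 2 {
		fmt.Printf("status says not two apiservers are stopped: got %d\n", got)
	}
}
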
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-070032 -n ha-070032
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 logs -n 25: (1.904462685s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m04 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp testdata/cp-test.txt                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt                       |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032 sudo cat                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032.txt                                 |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m02 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n                                                                 | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | ha-070032-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-070032 ssh -n ha-070032-m03 sudo cat                                          | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC | 10 Dec 24 00:10 UTC |
	|         | /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-070032 node stop m02 -v=7                                                     | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:10 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-070032 node start m02 -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-070032 -v=7                                                           | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-070032 -v=7                                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-070032 --wait=true -v=7                                                    | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:15 UTC | 10 Dec 24 00:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-070032                                                                | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC |                     |
	| node    | ha-070032 node delete m03 -v=7                                                   | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC | 10 Dec 24 00:19 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-070032 stop -v=7                                                              | ha-070032 | jenkins | v1.34.0 | 10 Dec 24 00:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:15:10
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:15:10.598169  103771 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:15:10.598303  103771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:15:10.598314  103771 out.go:358] Setting ErrFile to fd 2...
	I1210 00:15:10.598319  103771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:15:10.598588  103771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:15:10.599277  103771 out.go:352] Setting JSON to false
	I1210 00:15:10.600512  103771 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7062,"bootTime":1733782649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:15:10.600667  103771 start.go:139] virtualization: kvm guest
	I1210 00:15:10.603059  103771 out.go:177] * [ha-070032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:15:10.604442  103771 notify.go:220] Checking for updates...
	I1210 00:15:10.604487  103771 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:15:10.605786  103771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:15:10.607020  103771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:15:10.608351  103771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:15:10.609675  103771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:15:10.610868  103771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:15:10.612418  103771 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:15:10.612526  103771 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:15:10.612978  103771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:15:10.613015  103771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:15:10.628312  103771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I1210 00:15:10.628773  103771 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:15:10.629326  103771 main.go:141] libmachine: Using API Version  1
	I1210 00:15:10.629353  103771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:15:10.629755  103771 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:15:10.629920  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.663515  103771 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:15:10.664563  103771 start.go:297] selected driver: kvm2
	I1210 00:15:10.664575  103771 start.go:901] validating driver "kvm2" against &{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:15:10.664765  103771 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:15:10.665092  103771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:15:10.665153  103771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:15:10.679405  103771 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:15:10.680103  103771 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:15:10.680153  103771 cni.go:84] Creating CNI manager for ""
	I1210 00:15:10.680231  103771 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1210 00:15:10.680289  103771 start.go:340] cluster config:
	{Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:15:10.680407  103771 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:15:10.682620  103771 out.go:177] * Starting "ha-070032" primary control-plane node in "ha-070032" cluster
	I1210 00:15:10.683791  103771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:15:10.683844  103771 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:15:10.683856  103771 cache.go:56] Caching tarball of preloaded images
	I1210 00:15:10.683938  103771 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:15:10.683950  103771 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:15:10.684059  103771 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/config.json ...
	I1210 00:15:10.684272  103771 start.go:360] acquireMachinesLock for ha-070032: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:15:10.684315  103771 start.go:364] duration metric: took 25.728µs to acquireMachinesLock for "ha-070032"
	I1210 00:15:10.684333  103771 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:15:10.684338  103771 fix.go:54] fixHost starting: 
	I1210 00:15:10.684730  103771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:15:10.684792  103771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:15:10.698703  103771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I1210 00:15:10.699126  103771 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:15:10.699629  103771 main.go:141] libmachine: Using API Version  1
	I1210 00:15:10.699656  103771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:15:10.699949  103771 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:15:10.700165  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.700362  103771 main.go:141] libmachine: (ha-070032) Calling .GetState
	I1210 00:15:10.701983  103771 fix.go:112] recreateIfNeeded on ha-070032: state=Running err=<nil>
	W1210 00:15:10.702000  103771 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:15:10.703624  103771 out.go:177] * Updating the running kvm2 "ha-070032" VM ...
	I1210 00:15:10.704812  103771 machine.go:93] provisionDockerMachine start ...
	I1210 00:15:10.704833  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:15:10.705043  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.707678  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.708162  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.708189  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.708348  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.708510  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.708671  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.708771  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.708915  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.709155  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.709176  103771 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:15:10.811320  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:15:10.811355  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:10.811645  103771 buildroot.go:166] provisioning hostname "ha-070032"
	I1210 00:15:10.811676  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:10.811863  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.814597  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.815107  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.815130  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.815317  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.815523  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.815682  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.815823  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.816019  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.816209  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.816231  103771 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-070032 && echo "ha-070032" | sudo tee /etc/hostname
	I1210 00:15:10.929401  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-070032
	
	I1210 00:15:10.929448  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:10.931892  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.932267  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:10.932296  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:10.932452  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:10.932649  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.932821  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:10.932962  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:10.933126  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:10.933311  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:10.933326  103771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-070032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-070032/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-070032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:15:11.034787  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:15:11.034821  103771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:15:11.034863  103771 buildroot.go:174] setting up certificates
	I1210 00:15:11.034878  103771 provision.go:84] configureAuth start
	I1210 00:15:11.034893  103771 main.go:141] libmachine: (ha-070032) Calling .GetMachineName
	I1210 00:15:11.035195  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:15:11.037771  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.038159  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.038186  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.038291  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.040603  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.040973  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.041016  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.041143  103771 provision.go:143] copyHostCerts
	I1210 00:15:11.041171  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:15:11.041220  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:15:11.041243  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:15:11.041322  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:15:11.041428  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:15:11.041454  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:15:11.041464  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:15:11.041505  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:15:11.041585  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:15:11.041615  103771 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:15:11.041624  103771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:15:11.041653  103771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:15:11.041737  103771 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.ha-070032 san=[127.0.0.1 192.168.39.187 ha-070032 localhost minikube]
	I1210 00:15:11.334330  103771 provision.go:177] copyRemoteCerts
	I1210 00:15:11.334411  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:15:11.334445  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.337216  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.337568  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.337600  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.337747  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:11.337944  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.338094  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:11.338242  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:15:11.417270  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:15:11.417340  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:15:11.441313  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:15:11.441389  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1210 00:15:11.465183  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:15:11.465251  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:15:11.488058  103771 provision.go:87] duration metric: took 453.163259ms to configureAuth
	I1210 00:15:11.488082  103771 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:15:11.488287  103771 config.go:182] Loaded profile config "ha-070032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:15:11.488358  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:15:11.490911  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.491295  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:15:11.491323  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:15:11.491474  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:15:11.491662  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.491794  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:15:11.491904  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:15:11.492002  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:15:11.492159  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:15:11.492174  103771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:16:42.313849  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:16:42.313900  103771 machine.go:96] duration metric: took 1m31.60907185s to provisionDockerMachine
	I1210 00:16:42.313921  103771 start.go:293] postStartSetup for "ha-070032" (driver="kvm2")
	I1210 00:16:42.313938  103771 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:16:42.313976  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.314315  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:16:42.314358  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.317604  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.318107  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.318136  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.318345  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.318548  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.318730  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.318883  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.396955  103771 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:16:42.401086  103771 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:16:42.401105  103771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:16:42.401183  103771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:16:42.401256  103771 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:16:42.401267  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:16:42.401348  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:16:42.410256  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:16:42.432569  103771 start.go:296] duration metric: took 118.633386ms for postStartSetup
	I1210 00:16:42.432612  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.432888  103771 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1210 00:16:42.432915  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.435790  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.436196  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.436222  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.436374  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.436549  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.436692  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.436824  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	W1210 00:16:42.516159  103771 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1210 00:16:42.516187  103771 fix.go:56] duration metric: took 1m31.831847584s for fixHost
	I1210 00:16:42.516217  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.519071  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.519473  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.519494  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.519673  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.519878  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.520043  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.520206  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.520387  103771 main.go:141] libmachine: Using SSH client type: native
	I1210 00:16:42.520577  103771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I1210 00:16:42.520590  103771 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:16:42.619141  103771 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733789802.579379677
	
	I1210 00:16:42.619167  103771 fix.go:216] guest clock: 1733789802.579379677
	I1210 00:16:42.619175  103771 fix.go:229] Guest: 2024-12-10 00:16:42.579379677 +0000 UTC Remote: 2024-12-10 00:16:42.516197212 +0000 UTC m=+91.962884276 (delta=63.182465ms)
	I1210 00:16:42.619220  103771 fix.go:200] guest clock delta is within tolerance: 63.182465ms
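The lines above read the guest's `date +%s.%N`, compare it to the host-side timestamp, and accept the ~63ms drift as within tolerance. A minimal Go sketch of that comparison, using the values from the log; the `maxDelta` threshold is hypothetical, since the actual tolerance is not shown here:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log lines above.
	guest, _ := parseGuestClock("1733789802.579379677")
	remote := time.Date(2024, 12, 10, 0, 16, 42, 516197212, time.UTC)

	delta := guest.Sub(remote)
	// Hypothetical tolerance; the log only shows that ~63ms was accepted.
	const maxDelta = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(maxDelta) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock drift too large: %v\n", delta)
	}
}
```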
	I1210 00:16:42.619225  103771 start.go:83] releasing machines lock for "ha-070032", held for 1m31.934899017s
	I1210 00:16:42.619247  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.619470  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:16:42.621975  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.622327  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.622357  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.622499  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623063  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623266  103771 main.go:141] libmachine: (ha-070032) Calling .DriverName
	I1210 00:16:42.623341  103771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:16:42.623400  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.623450  103771 ssh_runner.go:195] Run: cat /version.json
	I1210 00:16:42.623471  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHHostname
	I1210 00:16:42.626110  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626134  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626487  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.626516  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626551  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:42.626582  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:42.626667  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.626675  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHPort
	I1210 00:16:42.626845  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.626882  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHKeyPath
	I1210 00:16:42.626977  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.626977  103771 main.go:141] libmachine: (ha-070032) Calling .GetSSHUsername
	I1210 00:16:42.627097  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.627195  103771 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/ha-070032/id_rsa Username:docker}
	I1210 00:16:42.699828  103771 ssh_runner.go:195] Run: systemctl --version
	I1210 00:16:42.722140  103771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:16:42.883650  103771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:16:42.889845  103771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:16:42.889907  103771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:16:42.899213  103771 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:16:42.899243  103771 start.go:495] detecting cgroup driver to use...
	I1210 00:16:42.899316  103771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:16:42.914795  103771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:16:42.927943  103771 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:16:42.928003  103771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:16:42.940509  103771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:16:42.952543  103771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:16:43.093818  103771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:16:43.231556  103771 docker.go:233] disabling docker service ...
	I1210 00:16:43.231622  103771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:16:43.246638  103771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:16:43.259046  103771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:16:43.399360  103771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:16:43.542018  103771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:16:43.555195  103771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:16:43.572208  103771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:16:43.572275  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.581953  103771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:16:43.582010  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.592046  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.601894  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.610888  103771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:16:43.620246  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.629413  103771 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.639149  103771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:16:43.648162  103771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:16:43.656172  103771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:16:43.664176  103771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:16:43.806178  103771 ssh_runner.go:195] Run: sudo systemctl restart crio
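The steps above edit `/etc/crio/crio.conf.d/02-crio.conf` in place with `sed` (pause image, cgroup manager, conmon cgroup, default sysctls) and then restart CRI-O. A small Go sketch of the same kind of in-place rewrite done locally with regexp; the file path and option values follow the log, but this is an illustration, not minikube's implementation (which runs the `sed` commands remotely over SSH):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the substitutions shown in the log: force the
// pause image and switch the cgroup manager to cgroupfs.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```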
	I1210 00:16:44.731566  103771 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:16:44.731644  103771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:16:44.736363  103771 start.go:563] Will wait 60s for crictl version
	I1210 00:16:44.736433  103771 ssh_runner.go:195] Run: which crictl
	I1210 00:16:44.739953  103771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:16:44.776710  103771 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
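After restarting CRI-O, the log waits up to 60s for the CRI socket to appear and then up to 60s for `crictl version` to answer. A minimal sketch of the socket wait in Go; the helper name is illustrative, not minikube's:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket path until it exists or the
// timeout elapses, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CRI socket is ready")
}
```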
	I1210 00:16:44.776835  103771 ssh_runner.go:195] Run: crio --version
	I1210 00:16:44.802038  103771 ssh_runner.go:195] Run: crio --version
	I1210 00:16:44.829452  103771 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:16:44.830662  103771 main.go:141] libmachine: (ha-070032) Calling .GetIP
	I1210 00:16:44.833111  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:44.833475  103771 main.go:141] libmachine: (ha-070032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:ce:dc", ip: ""} in network mk-ha-070032: {Iface:virbr1 ExpiryTime:2024-12-10 01:06:07 +0000 UTC Type:0 Mac:52:54:00:ad:ce:dc Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:ha-070032 Clientid:01:52:54:00:ad:ce:dc}
	I1210 00:16:44.833501  103771 main.go:141] libmachine: (ha-070032) DBG | domain ha-070032 has defined IP address 192.168.39.187 and MAC address 52:54:00:ad:ce:dc in network mk-ha-070032
	I1210 00:16:44.833765  103771 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:16:44.838204  103771 kubeadm.go:883] updating cluster {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:16:44.838359  103771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:16:44.838413  103771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:16:44.880115  103771 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:16:44.880133  103771 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:16:44.880186  103771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:16:44.916770  103771 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:16:44.916798  103771 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:16:44.916811  103771 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.2 crio true true} ...
	I1210 00:16:44.916967  103771 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-070032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:16:44.917056  103771 ssh_runner.go:195] Run: crio config
	I1210 00:16:44.965626  103771 cni.go:84] Creating CNI manager for ""
	I1210 00:16:44.965650  103771 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1210 00:16:44.965661  103771 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:16:44.965685  103771 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-070032 NodeName:ha-070032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:16:44.965796  103771 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-070032"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.187"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
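The kubeadm/kubelet/kube-proxy config above is rendered from the options logged just before it (node IP, pod subnet, cgroup driver, CRI socket). A minimal sketch of rendering one fragment of such a config with Go's text/template; the struct and field names are invented for illustration, and minikube's real template is considerably larger:

```go
package main

import (
	"os"
	"text/template"
)

// A tiny, illustrative fragment of the generated config.
const kubeletFragment = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
`

type kubeletOpts struct {
	CgroupDriver string
	CRISocket    string
	DNSDomain    string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletFragment))
	// Values match the config dumped in the log above.
	_ = tmpl.Execute(os.Stdout, kubeletOpts{
		CgroupDriver: "cgroupfs",
		CRISocket:    "unix:///var/run/crio/crio.sock",
		DNSDomain:    "cluster.local",
	})
}
```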
	
	I1210 00:16:44.965815  103771 kube-vip.go:115] generating kube-vip config ...
	I1210 00:16:44.965859  103771 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1210 00:16:44.976266  103771 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1210 00:16:44.976383  103771 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
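The kube-vip manifest above enables control-plane leader election on the lease `plndr-cp-lock` in `kube-system` with a 5s lease duration, 3s renew deadline, and 1s retry period, and announces the VIP 192.168.39.254 from whichever node holds the lease. A sketch of how those lease parameters map onto client-go leader election; this is not kube-vip's code, only an illustration of the same lease semantics under an in-cluster config assumption:

```go
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	hostname, _ := os.Hostname()

	// Lease name, namespace, and timings taken from the kube-vip config above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 5 * time.Second,
		RenewDeadline: 3 * time.Second,
		RetryPeriod:   1 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The leader would announce the VIP (192.168.39.254) here.
			},
			OnStoppedLeading: func() {
				// A former leader would withdraw the VIP here.
			},
		},
	})
}
```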
	I1210 00:16:44.976438  103771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:16:44.984879  103771 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:16:44.984929  103771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1210 00:16:44.993009  103771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1210 00:16:45.008190  103771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:16:45.023324  103771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:16:45.038064  103771 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1210 00:16:45.055047  103771 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1210 00:16:45.058276  103771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:16:45.196438  103771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:16:45.210187  103771 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032 for IP: 192.168.39.187
	I1210 00:16:45.210212  103771 certs.go:194] generating shared ca certs ...
	I1210 00:16:45.210252  103771 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.210432  103771 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:16:45.210476  103771 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:16:45.210485  103771 certs.go:256] generating profile certs ...
	I1210 00:16:45.210553  103771 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/client.key
	I1210 00:16:45.210603  103771 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df
	I1210 00:16:45.210619  103771 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187 192.168.39.198 192.168.39.244 192.168.39.254]
	I1210 00:16:45.353544  103771 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df ...
	I1210 00:16:45.353574  103771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df: {Name:mk4654b3496b9eef04c053407d2661010f22e0ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.353742  103771 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df ...
	I1210 00:16:45.353757  103771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df: {Name:mk08d3f17afea49a4ad236e77fa4cbea3a92387c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:16:45.353827  103771 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt.551195df -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt
	I1210 00:16:45.353988  103771 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key.551195df -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key
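The apiserver serving certificate generated above is signed for the cluster service IP, localhost, the three control-plane node IPs, and the VIP. A short sketch of producing a certificate with the same IP SANs using crypto/x509; it is self-signed here for brevity, whereas the real cert is signed by minikubeCA:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// IP SANs taken from the log line above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.187"), net.ParseIP("192.168.39.198"),
		net.ParseIP("192.168.39.244"), net.ParseIP("192.168.39.254"),
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
	}

	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```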
	I1210 00:16:45.354128  103771 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key
	I1210 00:16:45.354143  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:16:45.354159  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:16:45.354170  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:16:45.354183  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:16:45.354195  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:16:45.354211  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:16:45.354223  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:16:45.354236  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:16:45.354282  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:16:45.354308  103771 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:16:45.354318  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:16:45.354340  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:16:45.354363  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:16:45.354383  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:16:45.354418  103771 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:16:45.354444  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.354457  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.354469  103771 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.355099  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:16:45.378490  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:16:45.399893  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:16:45.423690  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:16:45.446556  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 00:16:45.467760  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:16:45.489233  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:16:45.511664  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/ha-070032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:16:45.532906  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:16:45.554290  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:16:45.575870  103771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:16:45.596724  103771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:16:45.611721  103771 ssh_runner.go:195] Run: openssl version
	I1210 00:16:45.617063  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:16:45.626333  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.630294  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.630345  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:16:45.635359  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:16:45.643407  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:16:45.653116  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.657148  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.657194  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:16:45.662255  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:16:45.670551  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:16:45.679937  103771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.683774  103771 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.683817  103771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:16:45.688854  103771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:16:45.697366  103771 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:16:45.701578  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:16:45.706881  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:16:45.712179  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:16:45.717490  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:16:45.722999  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:16:45.728056  103771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
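Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours. An equivalent check in Go with crypto/x509; the file path is one of those from the log and the helper name is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```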
	I1210 00:16:45.732967  103771 kubeadm.go:392] StartCluster: {Name:ha-070032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-070032 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.198 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.244 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.178 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:16:45.733075  103771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:16:45.733120  103771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:16:45.768818  103771 cri.go:89] found id: "f2d29a23909f92fda903a5601d13c1c6a2dc667c9c3a1f81d56dee246338a18b"
	I1210 00:16:45.768836  103771 cri.go:89] found id: "0fa25a8d120e2f2c7b154619f684076b5851d4ee3636fe33a6af34540cb69db4"
	I1210 00:16:45.768839  103771 cri.go:89] found id: "ace54247ca40b5deb01ae561833de3524e7ff36138c9356a9512ab5b925bbb88"
	I1210 00:16:45.768842  103771 cri.go:89] found id: "e305236942a6a79ef7f91e253be393cd58488d48aba3b5bf66c479acd0067bc8"
	I1210 00:16:45.768845  103771 cri.go:89] found id: "7c2e334f3ec55e4be646775958650ae4186637afeca4998288d1fbd38037c8ea"
	I1210 00:16:45.768848  103771 cri.go:89] found id: "a0bc6f0cc193d54e7a8a7dd22a38ca0e4d4f61bb51f322d3f41dac47db13c95b"
	I1210 00:16:45.768851  103771 cri.go:89] found id: "4c87cad753cfce5b05fd5987342e13dea86a36bc44f9c4b3a934dd48d2329af3"
	I1210 00:16:45.768853  103771 cri.go:89] found id: "d7ce0ccc8b2285ae9861ca675ddee1c7cc4b2eb95f1fc9b3c252e8b7f70e57e2"
	I1210 00:16:45.768855  103771 cri.go:89] found id: "2c832ea7354c3849fb453286649b272a5fcf355d47187efa465c13d6fc4d65dd"
	I1210 00:16:45.768860  103771 cri.go:89] found id: "a1ad93591d94d418f536035e0dbd58787d7e5c96ef2645619b7bf1fdd88df33c"
	I1210 00:16:45.768863  103771 cri.go:89] found id: "1482c9caeda45e0518bea419e7de0b9d7ea7016563bc8769f4f33151afc52fca"
	I1210 00:16:45.768875  103771 cri.go:89] found id: "3cc792ca2c2098e4d3b71b355fb33ce358f1d7de4b2454c236712b6102bdaa06"
	I1210 00:16:45.768881  103771 cri.go:89] found id: "d06c286b00c118c3e50e8da5c902bb87ed0d31b055437ffb3e10bda025f3a64d"
	I1210 00:16:45.768884  103771 cri.go:89] found id: ""
	I1210 00:16:45.768917  103771 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-070032 -n ha-070032
helpers_test.go:261: (dbg) Run:  kubectl --context ha-070032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.03s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029725
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-029725
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-029725: exit status 82 (2m1.810019173s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-029725-m03"  ...
	* Stopping node "multinode-029725-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-029725" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029725 --wait=true -v=8 --alsologtostderr
E1210 00:40:09.288938   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:40:47.491317   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029725 --wait=true -v=8 --alsologtostderr: (3m19.046105666s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029725
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-029725 -n multinode-029725
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 logs -n 25: (1.856829273s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725:/home/docker/cp-test_multinode-029725-m02_multinode-029725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725 sudo cat                                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m02_multinode-029725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03:/home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725-m03 sudo cat                                   | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp testdata/cp-test.txt                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725:/home/docker/cp-test_multinode-029725-m03_multinode-029725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725 sudo cat                                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02:/home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725-m02 sudo cat                                   | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-029725 node stop m03                                                          | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	| node    | multinode-029725 node start                                                             | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| stop    | -p multinode-029725                                                                     | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| start   | -p multinode-029725                                                                     | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
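	The command table above is the minikube audit trail for this multinode run: each file-copy check pairs a minikube cp with a minikube ssh -n read-back of the copied file on the target node. A minimal sketch of that round trip, using the profile and node names from this run (flag spelling assumed from the minikube CLI rather than taken from this log):
	
	  # copy a test file onto node m03 of the multinode-029725 profile
	  minikube -p multinode-029725 cp testdata/cp-test.txt multinode-029725-m03:/home/docker/cp-test.txt
	  # read it back on that node to confirm the copy landed
	  minikube -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test.txt"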
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:38:14
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:38:14.190179  115982 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:38:14.190289  115982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:38:14.190298  115982 out.go:358] Setting ErrFile to fd 2...
	I1210 00:38:14.190302  115982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:38:14.190498  115982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:38:14.191028  115982 out.go:352] Setting JSON to false
	I1210 00:38:14.191870  115982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8445,"bootTime":1733782649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:38:14.191974  115982 start.go:139] virtualization: kvm guest
	I1210 00:38:14.194008  115982 out.go:177] * [multinode-029725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:38:14.195576  115982 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:38:14.195570  115982 notify.go:220] Checking for updates...
	I1210 00:38:14.197009  115982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:38:14.198170  115982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:38:14.199201  115982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:38:14.200232  115982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:38:14.201383  115982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:38:14.203476  115982 config.go:182] Loaded profile config "multinode-029725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:38:14.203575  115982 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:38:14.204032  115982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:38:14.204094  115982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:38:14.219050  115982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1210 00:38:14.219530  115982 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:38:14.220090  115982 main.go:141] libmachine: Using API Version  1
	I1210 00:38:14.220109  115982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:38:14.220455  115982 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:38:14.220641  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.254760  115982 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:38:14.255876  115982 start.go:297] selected driver: kvm2
	I1210 00:38:14.255886  115982 start.go:901] validating driver "kvm2" against &{Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:38:14.256023  115982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:38:14.256323  115982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:38:14.256394  115982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:38:14.270620  115982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:38:14.271282  115982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:38:14.271333  115982 cni.go:84] Creating CNI manager for ""
	I1210 00:38:14.271395  115982 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1210 00:38:14.271453  115982 start.go:340] cluster config:
	{Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:38:14.271572  115982 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:38:14.273140  115982 out.go:177] * Starting "multinode-029725" primary control-plane node in "multinode-029725" cluster
	I1210 00:38:14.274391  115982 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:38:14.274427  115982 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:38:14.274437  115982 cache.go:56] Caching tarball of preloaded images
	I1210 00:38:14.274511  115982 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:38:14.274521  115982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:38:14.274657  115982 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/config.json ...
	I1210 00:38:14.274865  115982 start.go:360] acquireMachinesLock for multinode-029725: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:38:14.274910  115982 start.go:364] duration metric: took 26.103µs to acquireMachinesLock for "multinode-029725"
	I1210 00:38:14.274924  115982 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:38:14.274932  115982 fix.go:54] fixHost starting: 
	I1210 00:38:14.275174  115982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:38:14.275203  115982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:38:14.288820  115982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1210 00:38:14.289259  115982 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:38:14.289653  115982 main.go:141] libmachine: Using API Version  1
	I1210 00:38:14.289677  115982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:38:14.290032  115982 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:38:14.290207  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.290353  115982 main.go:141] libmachine: (multinode-029725) Calling .GetState
	I1210 00:38:14.291871  115982 fix.go:112] recreateIfNeeded on multinode-029725: state=Running err=<nil>
	W1210 00:38:14.291901  115982 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:38:14.293691  115982 out.go:177] * Updating the running kvm2 "multinode-029725" VM ...
	I1210 00:38:14.294875  115982 machine.go:93] provisionDockerMachine start ...
	I1210 00:38:14.294893  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.295081  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.297772  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.298256  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.298296  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.298488  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.298697  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.298849  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.298954  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.299070  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.299256  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.299266  115982 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:38:14.403279  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029725
	
	I1210 00:38:14.403307  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.403534  115982 buildroot.go:166] provisioning hostname "multinode-029725"
	I1210 00:38:14.403553  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.403724  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.406066  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.406409  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.406435  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.406593  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.406748  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.406878  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.406984  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.407102  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.407286  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.407298  115982 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-029725 && echo "multinode-029725" | sudo tee /etc/hostname
	I1210 00:38:14.527246  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029725
	
	I1210 00:38:14.527279  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.530053  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.530426  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.530469  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.530617  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.530801  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.530963  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.531093  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.531249  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.531407  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.531425  115982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-029725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-029725/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-029725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:38:14.638030  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:38:14.638069  115982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:38:14.638097  115982 buildroot.go:174] setting up certificates
	I1210 00:38:14.638117  115982 provision.go:84] configureAuth start
	I1210 00:38:14.638136  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.638429  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:38:14.641174  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.641530  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.641555  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.641702  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.643918  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.644269  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.644304  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.644460  115982 provision.go:143] copyHostCerts
	I1210 00:38:14.644499  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:38:14.644536  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:38:14.644553  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:38:14.644617  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:38:14.644703  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:38:14.644724  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:38:14.644731  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:38:14.644756  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:38:14.644812  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:38:14.644833  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:38:14.644839  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:38:14.644861  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:38:14.644920  115982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.multinode-029725 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-029725]
	I1210 00:38:14.693389  115982 provision.go:177] copyRemoteCerts
	I1210 00:38:14.693434  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:38:14.693454  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.695835  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.696134  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.696165  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.696278  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.696428  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.696602  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.696707  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:38:14.776128  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:38:14.776200  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:38:14.798594  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:38:14.798636  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:38:14.824214  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:38:14.824275  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1210 00:38:14.846865  115982 provision.go:87] duration metric: took 208.732774ms to configureAuth
	I1210 00:38:14.846886  115982 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:38:14.847099  115982 config.go:182] Loaded profile config "multinode-029725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:38:14.847176  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.849833  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.850161  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.850189  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.850429  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.850628  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.850807  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.850930  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.851082  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.851287  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.851303  115982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:39:45.474499  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:39:45.474529  115982 machine.go:96] duration metric: took 1m31.179639995s to provisionDockerMachine
	I1210 00:39:45.474546  115982 start.go:293] postStartSetup for "multinode-029725" (driver="kvm2")
	I1210 00:39:45.474578  115982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:39:45.474606  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.475048  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:39:45.475086  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.477988  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.478420  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.478445  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.478644  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.478851  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.479019  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.479168  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.561763  115982 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:39:45.565214  115982 command_runner.go:130] > NAME=Buildroot
	I1210 00:39:45.565232  115982 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1210 00:39:45.565251  115982 command_runner.go:130] > ID=buildroot
	I1210 00:39:45.565259  115982 command_runner.go:130] > VERSION_ID=2023.02.9
	I1210 00:39:45.565270  115982 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1210 00:39:45.565358  115982 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:39:45.565381  115982 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:39:45.565458  115982 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:39:45.565565  115982 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:39:45.565580  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:39:45.565713  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:39:45.573890  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:39:45.595346  115982 start.go:296] duration metric: took 120.786681ms for postStartSetup
	I1210 00:39:45.595397  115982 fix.go:56] duration metric: took 1m31.320463472s for fixHost
	I1210 00:39:45.595423  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.597962  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.598320  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.598346  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.598507  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.598674  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.598839  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.598955  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.599093  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:39:45.599308  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:39:45.599323  115982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:39:45.698410  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791185.674166324
	
	I1210 00:39:45.698435  115982 fix.go:216] guest clock: 1733791185.674166324
	I1210 00:39:45.698445  115982 fix.go:229] Guest: 2024-12-10 00:39:45.674166324 +0000 UTC Remote: 2024-12-10 00:39:45.595403119 +0000 UTC m=+91.444659181 (delta=78.763205ms)
	I1210 00:39:45.698493  115982 fix.go:200] guest clock delta is within tolerance: 78.763205ms
	I1210 00:39:45.698506  115982 start.go:83] releasing machines lock for "multinode-029725", held for 1m31.423586478s
	I1210 00:39:45.698533  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.698818  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:39:45.701390  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.701741  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.701769  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.701941  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702425  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702617  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702703  115982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:39:45.702757  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.702862  115982 ssh_runner.go:195] Run: cat /version.json
	I1210 00:39:45.702887  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.705191  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705499  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.705526  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705594  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705696  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.705889  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.706062  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.706080  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.706114  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.706218  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.706281  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.706436  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.706614  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.706747  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.782484  115982 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1210 00:39:45.782855  115982 ssh_runner.go:195] Run: systemctl --version
	I1210 00:39:45.802553  115982 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 00:39:45.802605  115982 command_runner.go:130] > systemd 252 (252)
	I1210 00:39:45.802622  115982 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1210 00:39:45.802681  115982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:39:45.966307  115982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 00:39:45.972133  115982 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 00:39:45.972256  115982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:39:45.972326  115982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:39:45.981373  115982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:39:45.981394  115982 start.go:495] detecting cgroup driver to use...
	I1210 00:39:45.981484  115982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:39:45.998843  115982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:39:46.013150  115982 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:39:46.013215  115982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:39:46.027740  115982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:39:46.042073  115982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:39:46.196596  115982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:39:46.327771  115982 docker.go:233] disabling docker service ...
	I1210 00:39:46.327841  115982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:39:46.344809  115982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:39:46.357520  115982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:39:46.489238  115982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:39:46.623593  115982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:39:46.636063  115982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:39:46.653089  115982 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 00:39:46.653512  115982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:39:46.653574  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.663053  115982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:39:46.663117  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.672299  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.681370  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.690423  115982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:39:46.699716  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.708727  115982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.718437  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.727592  115982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:39:46.735896  115982 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 00:39:46.735967  115982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:39:46.743996  115982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:39:46.872593  115982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:39:47.051539  115982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:39:47.051623  115982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:39:47.056412  115982 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 00:39:47.056437  115982 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 00:39:47.056466  115982 command_runner.go:130] > Device: 0,22	Inode: 1279        Links: 1
	I1210 00:39:47.056481  115982 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 00:39:47.056490  115982 command_runner.go:130] > Access: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056507  115982 command_runner.go:130] > Modify: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056516  115982 command_runner.go:130] > Change: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056524  115982 command_runner.go:130] >  Birth: -
	I1210 00:39:47.056780  115982 start.go:563] Will wait 60s for crictl version
	I1210 00:39:47.056830  115982 ssh_runner.go:195] Run: which crictl
	I1210 00:39:47.060330  115982 command_runner.go:130] > /usr/bin/crictl
	I1210 00:39:47.060403  115982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:39:47.096766  115982 command_runner.go:130] > Version:  0.1.0
	I1210 00:39:47.096785  115982 command_runner.go:130] > RuntimeName:  cri-o
	I1210 00:39:47.096789  115982 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1210 00:39:47.096794  115982 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 00:39:47.096904  115982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:39:47.096958  115982 ssh_runner.go:195] Run: crio --version
	I1210 00:39:47.121883  115982 command_runner.go:130] > crio version 1.29.1
	I1210 00:39:47.121898  115982 command_runner.go:130] > Version:        1.29.1
	I1210 00:39:47.121903  115982 command_runner.go:130] > GitCommit:      unknown
	I1210 00:39:47.121908  115982 command_runner.go:130] > GitCommitDate:  unknown
	I1210 00:39:47.121911  115982 command_runner.go:130] > GitTreeState:   clean
	I1210 00:39:47.121916  115982 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1210 00:39:47.121920  115982 command_runner.go:130] > GoVersion:      go1.21.6
	I1210 00:39:47.121924  115982 command_runner.go:130] > Compiler:       gc
	I1210 00:39:47.121930  115982 command_runner.go:130] > Platform:       linux/amd64
	I1210 00:39:47.121936  115982 command_runner.go:130] > Linkmode:       dynamic
	I1210 00:39:47.121943  115982 command_runner.go:130] > BuildTags:      
	I1210 00:39:47.121951  115982 command_runner.go:130] >   containers_image_ostree_stub
	I1210 00:39:47.121958  115982 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1210 00:39:47.121967  115982 command_runner.go:130] >   btrfs_noversion
	I1210 00:39:47.121972  115982 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1210 00:39:47.121986  115982 command_runner.go:130] >   libdm_no_deferred_remove
	I1210 00:39:47.121992  115982 command_runner.go:130] >   seccomp
	I1210 00:39:47.121997  115982 command_runner.go:130] > LDFlags:          unknown
	I1210 00:39:47.122001  115982 command_runner.go:130] > SeccompEnabled:   true
	I1210 00:39:47.122006  115982 command_runner.go:130] > AppArmorEnabled:  false
	I1210 00:39:47.122165  115982 ssh_runner.go:195] Run: crio --version
	I1210 00:39:47.149671  115982 command_runner.go:130] > crio version 1.29.1
	I1210 00:39:47.149697  115982 command_runner.go:130] > Version:        1.29.1
	I1210 00:39:47.149720  115982 command_runner.go:130] > GitCommit:      unknown
	I1210 00:39:47.149727  115982 command_runner.go:130] > GitCommitDate:  unknown
	I1210 00:39:47.149734  115982 command_runner.go:130] > GitTreeState:   clean
	I1210 00:39:47.149743  115982 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1210 00:39:47.149752  115982 command_runner.go:130] > GoVersion:      go1.21.6
	I1210 00:39:47.149756  115982 command_runner.go:130] > Compiler:       gc
	I1210 00:39:47.149761  115982 command_runner.go:130] > Platform:       linux/amd64
	I1210 00:39:47.149765  115982 command_runner.go:130] > Linkmode:       dynamic
	I1210 00:39:47.149771  115982 command_runner.go:130] > BuildTags:      
	I1210 00:39:47.149775  115982 command_runner.go:130] >   containers_image_ostree_stub
	I1210 00:39:47.149780  115982 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1210 00:39:47.149783  115982 command_runner.go:130] >   btrfs_noversion
	I1210 00:39:47.149788  115982 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1210 00:39:47.149795  115982 command_runner.go:130] >   libdm_no_deferred_remove
	I1210 00:39:47.149798  115982 command_runner.go:130] >   seccomp
	I1210 00:39:47.149803  115982 command_runner.go:130] > LDFlags:          unknown
	I1210 00:39:47.149807  115982 command_runner.go:130] > SeccompEnabled:   true
	I1210 00:39:47.149813  115982 command_runner.go:130] > AppArmorEnabled:  false
	I1210 00:39:47.151805  115982 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:39:47.153239  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:39:47.155974  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:47.156318  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:47.156340  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:47.156539  115982 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:39:47.160327  115982 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1210 00:39:47.160440  115982 kubeadm.go:883] updating cluster {Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:39:47.160610  115982 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:39:47.160665  115982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:39:47.198321  115982 command_runner.go:130] > {
	I1210 00:39:47.198341  115982 command_runner.go:130] >   "images": [
	I1210 00:39:47.198346  115982 command_runner.go:130] >     {
	I1210 00:39:47.198354  115982 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1210 00:39:47.198358  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198364  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1210 00:39:47.198368  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198372  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198380  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1210 00:39:47.198387  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1210 00:39:47.198390  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198402  115982 command_runner.go:130] >       "size": "94965812",
	I1210 00:39:47.198408  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198417  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198424  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198432  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198435  115982 command_runner.go:130] >     },
	I1210 00:39:47.198438  115982 command_runner.go:130] >     {
	I1210 00:39:47.198444  115982 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1210 00:39:47.198451  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198456  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1210 00:39:47.198462  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198466  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198474  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1210 00:39:47.198480  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1210 00:39:47.198484  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198488  115982 command_runner.go:130] >       "size": "94963761",
	I1210 00:39:47.198492  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198499  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198503  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198507  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198513  115982 command_runner.go:130] >     },
	I1210 00:39:47.198516  115982 command_runner.go:130] >     {
	I1210 00:39:47.198522  115982 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1210 00:39:47.198527  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198531  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1210 00:39:47.198535  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198540  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198547  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1210 00:39:47.198555  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1210 00:39:47.198571  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198575  115982 command_runner.go:130] >       "size": "1363676",
	I1210 00:39:47.198582  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198585  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198594  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198598  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198601  115982 command_runner.go:130] >     },
	I1210 00:39:47.198605  115982 command_runner.go:130] >     {
	I1210 00:39:47.198611  115982 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1210 00:39:47.198615  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198622  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 00:39:47.198626  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198630  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198638  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1210 00:39:47.198651  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1210 00:39:47.198657  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198661  115982 command_runner.go:130] >       "size": "31470524",
	I1210 00:39:47.198664  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198668  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198679  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198683  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198687  115982 command_runner.go:130] >     },
	I1210 00:39:47.198690  115982 command_runner.go:130] >     {
	I1210 00:39:47.198696  115982 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1210 00:39:47.198702  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198707  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1210 00:39:47.198710  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198715  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198724  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1210 00:39:47.198731  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1210 00:39:47.198737  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198740  115982 command_runner.go:130] >       "size": "63273227",
	I1210 00:39:47.198744  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198748  115982 command_runner.go:130] >       "username": "nonroot",
	I1210 00:39:47.198751  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198755  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198758  115982 command_runner.go:130] >     },
	I1210 00:39:47.198768  115982 command_runner.go:130] >     {
	I1210 00:39:47.198777  115982 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1210 00:39:47.198781  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198786  115982 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1210 00:39:47.198789  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198793  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198799  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1210 00:39:47.198806  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1210 00:39:47.198809  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198813  115982 command_runner.go:130] >       "size": "149009664",
	I1210 00:39:47.198817  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.198821  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.198825  115982 command_runner.go:130] >       },
	I1210 00:39:47.198828  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198832  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198836  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198839  115982 command_runner.go:130] >     },
	I1210 00:39:47.198842  115982 command_runner.go:130] >     {
	I1210 00:39:47.198848  115982 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1210 00:39:47.198854  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198859  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1210 00:39:47.198862  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198866  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198873  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1210 00:39:47.198882  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1210 00:39:47.198886  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198890  115982 command_runner.go:130] >       "size": "95274464",
	I1210 00:39:47.198894  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.198898  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.198901  115982 command_runner.go:130] >       },
	I1210 00:39:47.198905  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198911  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198915  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198923  115982 command_runner.go:130] >     },
	I1210 00:39:47.198929  115982 command_runner.go:130] >     {
	I1210 00:39:47.198934  115982 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1210 00:39:47.198941  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198945  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1210 00:39:47.198949  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198952  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198972  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1210 00:39:47.198986  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1210 00:39:47.198989  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198993  115982 command_runner.go:130] >       "size": "89474374",
	I1210 00:39:47.198997  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199004  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.199007  115982 command_runner.go:130] >       },
	I1210 00:39:47.199010  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199014  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199017  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199020  115982 command_runner.go:130] >     },
	I1210 00:39:47.199023  115982 command_runner.go:130] >     {
	I1210 00:39:47.199029  115982 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1210 00:39:47.199032  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199037  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1210 00:39:47.199040  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199044  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199050  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1210 00:39:47.199057  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1210 00:39:47.199060  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199064  115982 command_runner.go:130] >       "size": "92783513",
	I1210 00:39:47.199068  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.199071  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199074  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199078  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199080  115982 command_runner.go:130] >     },
	I1210 00:39:47.199088  115982 command_runner.go:130] >     {
	I1210 00:39:47.199094  115982 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1210 00:39:47.199097  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199102  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1210 00:39:47.199105  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199109  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199115  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1210 00:39:47.199122  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1210 00:39:47.199125  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199128  115982 command_runner.go:130] >       "size": "68457798",
	I1210 00:39:47.199134  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199138  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.199141  115982 command_runner.go:130] >       },
	I1210 00:39:47.199145  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199156  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199160  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199163  115982 command_runner.go:130] >     },
	I1210 00:39:47.199167  115982 command_runner.go:130] >     {
	I1210 00:39:47.199172  115982 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1210 00:39:47.199178  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199182  115982 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1210 00:39:47.199186  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199189  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199198  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1210 00:39:47.199206  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1210 00:39:47.199210  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199213  115982 command_runner.go:130] >       "size": "742080",
	I1210 00:39:47.199217  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199221  115982 command_runner.go:130] >         "value": "65535"
	I1210 00:39:47.199224  115982 command_runner.go:130] >       },
	I1210 00:39:47.199228  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199232  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199236  115982 command_runner.go:130] >       "pinned": true
	I1210 00:39:47.199246  115982 command_runner.go:130] >     }
	I1210 00:39:47.199249  115982 command_runner.go:130] >   ]
	I1210 00:39:47.199252  115982 command_runner.go:130] > }
	I1210 00:39:47.199867  115982 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:39:47.199883  115982 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:39:47.199926  115982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:39:47.227942  115982 command_runner.go:130] > {
	I1210 00:39:47.227960  115982 command_runner.go:130] >   "images": [
	I1210 00:39:47.227964  115982 command_runner.go:130] >     {
	I1210 00:39:47.227971  115982 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1210 00:39:47.227976  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.227982  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1210 00:39:47.227985  115982 command_runner.go:130] >       ],
	I1210 00:39:47.227989  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.227997  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1210 00:39:47.228004  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1210 00:39:47.228007  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228011  115982 command_runner.go:130] >       "size": "94965812",
	I1210 00:39:47.228015  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228019  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228047  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228060  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228063  115982 command_runner.go:130] >     },
	I1210 00:39:47.228067  115982 command_runner.go:130] >     {
	I1210 00:39:47.228072  115982 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1210 00:39:47.228076  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228084  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1210 00:39:47.228087  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228091  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228097  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1210 00:39:47.228104  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1210 00:39:47.228108  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228112  115982 command_runner.go:130] >       "size": "94963761",
	I1210 00:39:47.228118  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228125  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228135  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228140  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228143  115982 command_runner.go:130] >     },
	I1210 00:39:47.228155  115982 command_runner.go:130] >     {
	I1210 00:39:47.228164  115982 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1210 00:39:47.228168  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228172  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1210 00:39:47.228179  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228182  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228189  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1210 00:39:47.228195  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1210 00:39:47.228199  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228203  115982 command_runner.go:130] >       "size": "1363676",
	I1210 00:39:47.228207  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228213  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228220  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228224  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228227  115982 command_runner.go:130] >     },
	I1210 00:39:47.228230  115982 command_runner.go:130] >     {
	I1210 00:39:47.228236  115982 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1210 00:39:47.228242  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228247  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 00:39:47.228250  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228253  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228260  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1210 00:39:47.228277  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1210 00:39:47.228281  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228285  115982 command_runner.go:130] >       "size": "31470524",
	I1210 00:39:47.228288  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228292  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228295  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228299  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228303  115982 command_runner.go:130] >     },
	I1210 00:39:47.228311  115982 command_runner.go:130] >     {
	I1210 00:39:47.228319  115982 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1210 00:39:47.228322  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228328  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1210 00:39:47.228332  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228335  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228342  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1210 00:39:47.228349  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1210 00:39:47.228353  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228357  115982 command_runner.go:130] >       "size": "63273227",
	I1210 00:39:47.228363  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228367  115982 command_runner.go:130] >       "username": "nonroot",
	I1210 00:39:47.228371  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228374  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228378  115982 command_runner.go:130] >     },
	I1210 00:39:47.228389  115982 command_runner.go:130] >     {
	I1210 00:39:47.228395  115982 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1210 00:39:47.228400  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228404  115982 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1210 00:39:47.228408  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228412  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228420  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1210 00:39:47.228429  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1210 00:39:47.228433  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228437  115982 command_runner.go:130] >       "size": "149009664",
	I1210 00:39:47.228441  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228444  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228450  115982 command_runner.go:130] >       },
	I1210 00:39:47.228456  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228460  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228464  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228467  115982 command_runner.go:130] >     },
	I1210 00:39:47.228470  115982 command_runner.go:130] >     {
	I1210 00:39:47.228480  115982 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1210 00:39:47.228486  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228491  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1210 00:39:47.228497  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228500  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228507  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1210 00:39:47.228517  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1210 00:39:47.228520  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228524  115982 command_runner.go:130] >       "size": "95274464",
	I1210 00:39:47.228527  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228531  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228537  115982 command_runner.go:130] >       },
	I1210 00:39:47.228541  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228545  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228549  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228552  115982 command_runner.go:130] >     },
	I1210 00:39:47.228555  115982 command_runner.go:130] >     {
	I1210 00:39:47.228561  115982 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1210 00:39:47.228567  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228572  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1210 00:39:47.228575  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228579  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228598  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1210 00:39:47.228608  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1210 00:39:47.228612  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228615  115982 command_runner.go:130] >       "size": "89474374",
	I1210 00:39:47.228619  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228623  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228626  115982 command_runner.go:130] >       },
	I1210 00:39:47.228630  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228633  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228636  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228639  115982 command_runner.go:130] >     },
	I1210 00:39:47.228647  115982 command_runner.go:130] >     {
	I1210 00:39:47.228655  115982 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1210 00:39:47.228659  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228665  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1210 00:39:47.228669  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228676  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228683  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1210 00:39:47.228694  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1210 00:39:47.228700  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228704  115982 command_runner.go:130] >       "size": "92783513",
	I1210 00:39:47.228708  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228712  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228716  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228719  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228723  115982 command_runner.go:130] >     },
	I1210 00:39:47.228726  115982 command_runner.go:130] >     {
	I1210 00:39:47.228731  115982 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1210 00:39:47.228737  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228742  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1210 00:39:47.228747  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228751  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228758  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1210 00:39:47.228767  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1210 00:39:47.228770  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228774  115982 command_runner.go:130] >       "size": "68457798",
	I1210 00:39:47.228778  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228782  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228785  115982 command_runner.go:130] >       },
	I1210 00:39:47.228789  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228792  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228796  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228800  115982 command_runner.go:130] >     },
	I1210 00:39:47.228809  115982 command_runner.go:130] >     {
	I1210 00:39:47.228822  115982 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1210 00:39:47.228829  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228833  115982 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1210 00:39:47.228839  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228842  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228848  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1210 00:39:47.228857  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1210 00:39:47.228861  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228866  115982 command_runner.go:130] >       "size": "742080",
	I1210 00:39:47.228870  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228876  115982 command_runner.go:130] >         "value": "65535"
	I1210 00:39:47.228879  115982 command_runner.go:130] >       },
	I1210 00:39:47.228883  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228889  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228892  115982 command_runner.go:130] >       "pinned": true
	I1210 00:39:47.228896  115982 command_runner.go:130] >     }
	I1210 00:39:47.228899  115982 command_runner.go:130] >   ]
	I1210 00:39:47.228907  115982 command_runner.go:130] > }
	I1210 00:39:47.229462  115982 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:39:47.229477  115982 cache_images.go:84] Images are preloaded, skipping loading
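
The two `sudo crictl images --output json` runs above return the JSON structure shown in the log: an `images` array whose entries carry `id`, `repoTags`, `repoDigests`, `size`, `uid`, `username`, and `pinned`. Below is a minimal Go sketch, assuming only that structure, which decodes such output from stdin and lists each image's first tag and size; the `imageList` type and field set are illustrative and are not minikube's own code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageList mirrors the fields visible in the crictl JSON logged above
// (illustrative type, not part of minikube).
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Read the JSON produced by `sudo crictl images --output json` from stdin.
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %12s bytes  pinned=%v\n", tag, img.Size, img.Pinned)
	}
}

Piped the crictl output into this program (for example via `go run` on a file containing the sketch, a hypothetical setup), it would print one line per image entry logged above.
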
	I1210 00:39:47.229486  115982 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.2 crio true true} ...
	I1210 00:39:47.229598  115982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-029725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
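
The kubelet unit text above is a systemd drop-in: the empty `ExecStart=` clears the packaged command line before the node-specific one is set, carrying the values logged earlier (`--hostname-override=multinode-029725`, `--node-ip=192.168.39.24`, binaries path for v1.31.2). A small Go sketch, with an illustrative `node` struct and function name that are not minikube's own, assembling the same flag string from those values:

package main

import "fmt"

// node holds the per-node values seen in the drop-in above (illustrative).
type node struct {
	Name string // --hostname-override
	IP   string // --node-ip
	K8s  string // kubelet binary version directory
}

// kubeletExecStart rebuilds the ExecStart line from the node values.
func kubeletExecStart(n node) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		n.K8s, n.Name, n.IP)
}

func main() {
	fmt.Println(kubeletExecStart(node{Name: "multinode-029725", IP: "192.168.39.24", K8s: "v1.31.2"}))
}
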
	I1210 00:39:47.229665  115982 ssh_runner.go:195] Run: crio config
	I1210 00:39:47.258833  115982 command_runner.go:130] ! time="2024-12-10 00:39:47.234608337Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1210 00:39:47.265305  115982 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 00:39:47.272805  115982 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 00:39:47.272833  115982 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 00:39:47.272844  115982 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 00:39:47.272849  115982 command_runner.go:130] > #
	I1210 00:39:47.272863  115982 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 00:39:47.272877  115982 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 00:39:47.272889  115982 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 00:39:47.272902  115982 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 00:39:47.272911  115982 command_runner.go:130] > # reload'.
	I1210 00:39:47.272935  115982 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 00:39:47.272948  115982 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 00:39:47.272959  115982 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 00:39:47.272971  115982 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 00:39:47.272978  115982 command_runner.go:130] > [crio]
	I1210 00:39:47.272987  115982 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 00:39:47.272998  115982 command_runner.go:130] > # containers images, in this directory.
	I1210 00:39:47.273008  115982 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1210 00:39:47.273024  115982 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 00:39:47.273034  115982 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1210 00:39:47.273049  115982 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 00:39:47.273058  115982 command_runner.go:130] > # imagestore = ""
	I1210 00:39:47.273068  115982 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 00:39:47.273083  115982 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 00:39:47.273091  115982 command_runner.go:130] > storage_driver = "overlay"
	I1210 00:39:47.273096  115982 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 00:39:47.273104  115982 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 00:39:47.273110  115982 command_runner.go:130] > storage_option = [
	I1210 00:39:47.273116  115982 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1210 00:39:47.273119  115982 command_runner.go:130] > ]
	I1210 00:39:47.273128  115982 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 00:39:47.273137  115982 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 00:39:47.273144  115982 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 00:39:47.273149  115982 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 00:39:47.273157  115982 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 00:39:47.273161  115982 command_runner.go:130] > # always happen on a node reboot
	I1210 00:39:47.273166  115982 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 00:39:47.273182  115982 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 00:39:47.273190  115982 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 00:39:47.273195  115982 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 00:39:47.273200  115982 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1210 00:39:47.273207  115982 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 00:39:47.273216  115982 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 00:39:47.273220  115982 command_runner.go:130] > # internal_wipe = true
	I1210 00:39:47.273228  115982 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 00:39:47.273241  115982 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 00:39:47.273248  115982 command_runner.go:130] > # internal_repair = false
	I1210 00:39:47.273253  115982 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 00:39:47.273262  115982 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 00:39:47.273267  115982 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 00:39:47.273274  115982 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 00:39:47.273281  115982 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 00:39:47.273285  115982 command_runner.go:130] > [crio.api]
	I1210 00:39:47.273293  115982 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 00:39:47.273297  115982 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 00:39:47.273305  115982 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 00:39:47.273313  115982 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 00:39:47.273322  115982 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 00:39:47.273327  115982 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 00:39:47.273333  115982 command_runner.go:130] > # stream_port = "0"
	I1210 00:39:47.273338  115982 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 00:39:47.273344  115982 command_runner.go:130] > # stream_enable_tls = false
	I1210 00:39:47.273349  115982 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 00:39:47.273356  115982 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 00:39:47.273364  115982 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 00:39:47.273370  115982 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1210 00:39:47.273374  115982 command_runner.go:130] > # minutes.
	I1210 00:39:47.273382  115982 command_runner.go:130] > # stream_tls_cert = ""
	I1210 00:39:47.273390  115982 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 00:39:47.273396  115982 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1210 00:39:47.273402  115982 command_runner.go:130] > # stream_tls_key = ""
	I1210 00:39:47.273407  115982 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 00:39:47.273413  115982 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 00:39:47.273432  115982 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1210 00:39:47.273439  115982 command_runner.go:130] > # stream_tls_ca = ""
	I1210 00:39:47.273446  115982 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 00:39:47.273453  115982 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1210 00:39:47.273459  115982 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 00:39:47.273466  115982 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1210 00:39:47.273471  115982 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 00:39:47.273477  115982 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 00:39:47.273481  115982 command_runner.go:130] > [crio.runtime]
	I1210 00:39:47.273487  115982 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 00:39:47.273494  115982 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 00:39:47.273498  115982 command_runner.go:130] > # "nofile=1024:2048"
	I1210 00:39:47.273507  115982 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 00:39:47.273511  115982 command_runner.go:130] > # default_ulimits = [
	I1210 00:39:47.273517  115982 command_runner.go:130] > # ]
	I1210 00:39:47.273522  115982 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 00:39:47.273533  115982 command_runner.go:130] > # no_pivot = false
	I1210 00:39:47.273541  115982 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 00:39:47.273547  115982 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 00:39:47.273554  115982 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 00:39:47.273559  115982 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 00:39:47.273566  115982 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 00:39:47.273572  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 00:39:47.273579  115982 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1210 00:39:47.273583  115982 command_runner.go:130] > # Cgroup setting for conmon
	I1210 00:39:47.273589  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 00:39:47.273594  115982 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 00:39:47.273600  115982 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 00:39:47.273607  115982 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 00:39:47.273616  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 00:39:47.273622  115982 command_runner.go:130] > conmon_env = [
	I1210 00:39:47.273627  115982 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1210 00:39:47.273632  115982 command_runner.go:130] > ]
	I1210 00:39:47.273637  115982 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 00:39:47.273642  115982 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 00:39:47.273649  115982 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 00:39:47.273653  115982 command_runner.go:130] > # default_env = [
	I1210 00:39:47.273656  115982 command_runner.go:130] > # ]
	I1210 00:39:47.273662  115982 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 00:39:47.273669  115982 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1210 00:39:47.273673  115982 command_runner.go:130] > # selinux = false
	I1210 00:39:47.273679  115982 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 00:39:47.273687  115982 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1210 00:39:47.273692  115982 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1210 00:39:47.273698  115982 command_runner.go:130] > # seccomp_profile = ""
	I1210 00:39:47.273704  115982 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1210 00:39:47.273711  115982 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1210 00:39:47.273717  115982 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1210 00:39:47.273723  115982 command_runner.go:130] > # which might increase security.
	I1210 00:39:47.273732  115982 command_runner.go:130] > # This option is currently deprecated,
	I1210 00:39:47.273740  115982 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1210 00:39:47.273745  115982 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1210 00:39:47.273753  115982 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 00:39:47.273758  115982 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 00:39:47.273767  115982 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 00:39:47.273772  115982 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 00:39:47.273780  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.273784  115982 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 00:39:47.273789  115982 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 00:39:47.273796  115982 command_runner.go:130] > # the cgroup blockio controller.
	I1210 00:39:47.273800  115982 command_runner.go:130] > # blockio_config_file = ""
	I1210 00:39:47.273806  115982 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 00:39:47.273812  115982 command_runner.go:130] > # blockio parameters.
	I1210 00:39:47.273816  115982 command_runner.go:130] > # blockio_reload = false
	I1210 00:39:47.273824  115982 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 00:39:47.273828  115982 command_runner.go:130] > # irqbalance daemon.
	I1210 00:39:47.273835  115982 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 00:39:47.273847  115982 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1210 00:39:47.273861  115982 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 00:39:47.273874  115982 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 00:39:47.273883  115982 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 00:39:47.273895  115982 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 00:39:47.273905  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.273914  115982 command_runner.go:130] > # rdt_config_file = ""
	I1210 00:39:47.273922  115982 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 00:39:47.273930  115982 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 00:39:47.273962  115982 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 00:39:47.273970  115982 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 00:39:47.273975  115982 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 00:39:47.273981  115982 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 00:39:47.273985  115982 command_runner.go:130] > # will be added.
	I1210 00:39:47.273989  115982 command_runner.go:130] > # default_capabilities = [
	I1210 00:39:47.274000  115982 command_runner.go:130] > # 	"CHOWN",
	I1210 00:39:47.274004  115982 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 00:39:47.274008  115982 command_runner.go:130] > # 	"FSETID",
	I1210 00:39:47.274011  115982 command_runner.go:130] > # 	"FOWNER",
	I1210 00:39:47.274014  115982 command_runner.go:130] > # 	"SETGID",
	I1210 00:39:47.274017  115982 command_runner.go:130] > # 	"SETUID",
	I1210 00:39:47.274020  115982 command_runner.go:130] > # 	"SETPCAP",
	I1210 00:39:47.274024  115982 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 00:39:47.274027  115982 command_runner.go:130] > # 	"KILL",
	I1210 00:39:47.274030  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274037  115982 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 00:39:47.274046  115982 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 00:39:47.274052  115982 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 00:39:47.274057  115982 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 00:39:47.274063  115982 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 00:39:47.274067  115982 command_runner.go:130] > default_sysctls = [
	I1210 00:39:47.274071  115982 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 00:39:47.274074  115982 command_runner.go:130] > ]
	I1210 00:39:47.274079  115982 command_runner.go:130] > # List of devices on the host that a
	I1210 00:39:47.274087  115982 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 00:39:47.274091  115982 command_runner.go:130] > # allowed_devices = [
	I1210 00:39:47.274094  115982 command_runner.go:130] > # 	"/dev/fuse",
	I1210 00:39:47.274097  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274102  115982 command_runner.go:130] > # List of additional devices. specified as
	I1210 00:39:47.274109  115982 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 00:39:47.274114  115982 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 00:39:47.274121  115982 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 00:39:47.274128  115982 command_runner.go:130] > # additional_devices = [
	I1210 00:39:47.274133  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274140  115982 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 00:39:47.274144  115982 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 00:39:47.274147  115982 command_runner.go:130] > # 	"/etc/cdi",
	I1210 00:39:47.274151  115982 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 00:39:47.274164  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274173  115982 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 00:39:47.274178  115982 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 00:39:47.274183  115982 command_runner.go:130] > # Defaults to false.
	I1210 00:39:47.274188  115982 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 00:39:47.274199  115982 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 00:39:47.274207  115982 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 00:39:47.274211  115982 command_runner.go:130] > # hooks_dir = [
	I1210 00:39:47.274218  115982 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 00:39:47.274221  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274227  115982 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 00:39:47.274235  115982 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 00:39:47.274240  115982 command_runner.go:130] > # its default mounts from the following two files:
	I1210 00:39:47.274246  115982 command_runner.go:130] > #
	I1210 00:39:47.274254  115982 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 00:39:47.274260  115982 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 00:39:47.274268  115982 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 00:39:47.274271  115982 command_runner.go:130] > #
	I1210 00:39:47.274276  115982 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 00:39:47.274285  115982 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 00:39:47.274291  115982 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 00:39:47.274295  115982 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 00:39:47.274301  115982 command_runner.go:130] > #
	I1210 00:39:47.274305  115982 command_runner.go:130] > # default_mounts_file = ""
	I1210 00:39:47.274310  115982 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 00:39:47.274319  115982 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 00:39:47.274322  115982 command_runner.go:130] > pids_limit = 1024
	I1210 00:39:47.274328  115982 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 00:39:47.274335  115982 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 00:39:47.274341  115982 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 00:39:47.274350  115982 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 00:39:47.274355  115982 command_runner.go:130] > # log_size_max = -1
	I1210 00:39:47.274362  115982 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 00:39:47.274376  115982 command_runner.go:130] > # log_to_journald = false
	I1210 00:39:47.274389  115982 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 00:39:47.274394  115982 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 00:39:47.274399  115982 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 00:39:47.274404  115982 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 00:39:47.274410  115982 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 00:39:47.274414  115982 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 00:39:47.274420  115982 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 00:39:47.274426  115982 command_runner.go:130] > # read_only = false
	I1210 00:39:47.274431  115982 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 00:39:47.274440  115982 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 00:39:47.274443  115982 command_runner.go:130] > # live configuration reload.
	I1210 00:39:47.274447  115982 command_runner.go:130] > # log_level = "info"
	I1210 00:39:47.274452  115982 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 00:39:47.274459  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.274463  115982 command_runner.go:130] > # log_filter = ""
	I1210 00:39:47.274471  115982 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 00:39:47.274477  115982 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 00:39:47.274481  115982 command_runner.go:130] > # separated by comma.
	I1210 00:39:47.274488  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274494  115982 command_runner.go:130] > # uid_mappings = ""
	I1210 00:39:47.274500  115982 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 00:39:47.274505  115982 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 00:39:47.274512  115982 command_runner.go:130] > # separated by comma.
	I1210 00:39:47.274518  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274524  115982 command_runner.go:130] > # gid_mappings = ""
	I1210 00:39:47.274530  115982 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 00:39:47.274538  115982 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 00:39:47.274543  115982 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 00:39:47.274553  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274557  115982 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 00:39:47.274578  115982 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 00:39:47.274591  115982 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 00:39:47.274603  115982 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 00:39:47.274613  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274619  115982 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 00:39:47.274627  115982 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 00:39:47.274634  115982 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 00:39:47.274641  115982 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 00:39:47.274645  115982 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 00:39:47.274652  115982 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 00:39:47.274662  115982 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 00:39:47.274669  115982 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 00:39:47.274674  115982 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 00:39:47.274680  115982 command_runner.go:130] > drop_infra_ctr = false
	I1210 00:39:47.274686  115982 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 00:39:47.274694  115982 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 00:39:47.274700  115982 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 00:39:47.274706  115982 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 00:39:47.274712  115982 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 00:39:47.274720  115982 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 00:39:47.274725  115982 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 00:39:47.274733  115982 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 00:39:47.274737  115982 command_runner.go:130] > # shared_cpuset = ""
	I1210 00:39:47.274745  115982 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 00:39:47.274749  115982 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 00:39:47.274756  115982 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 00:39:47.274763  115982 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 00:39:47.274769  115982 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1210 00:39:47.274774  115982 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 00:39:47.274783  115982 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 00:39:47.274787  115982 command_runner.go:130] > # enable_criu_support = false
	I1210 00:39:47.274794  115982 command_runner.go:130] > # Enable/disable the generation of the container,
	I1210 00:39:47.274800  115982 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 00:39:47.274804  115982 command_runner.go:130] > # enable_pod_events = false
	I1210 00:39:47.274810  115982 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 00:39:47.274830  115982 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 00:39:47.274835  115982 command_runner.go:130] > # default_runtime = "runc"
	I1210 00:39:47.274846  115982 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 00:39:47.274866  115982 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1210 00:39:47.274882  115982 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 00:39:47.274897  115982 command_runner.go:130] > # creation as a file is not desired either.
	I1210 00:39:47.274911  115982 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 00:39:47.274919  115982 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 00:39:47.274923  115982 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 00:39:47.274937  115982 command_runner.go:130] > # ]
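	For illustration only (this snippet is not part of the captured configuration), the /etc/hostname case described above could be expressed as:
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]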
	I1210 00:39:47.274946  115982 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 00:39:47.274952  115982 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 00:39:47.274958  115982 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 00:39:47.274963  115982 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 00:39:47.274968  115982 command_runner.go:130] > #
	I1210 00:39:47.274972  115982 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 00:39:47.274977  115982 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 00:39:47.275025  115982 command_runner.go:130] > # runtime_type = "oci"
	I1210 00:39:47.275032  115982 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 00:39:47.275037  115982 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 00:39:47.275042  115982 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 00:39:47.275049  115982 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 00:39:47.275053  115982 command_runner.go:130] > # monitor_env = []
	I1210 00:39:47.275060  115982 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 00:39:47.275063  115982 command_runner.go:130] > # allowed_annotations = []
	I1210 00:39:47.275068  115982 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 00:39:47.275073  115982 command_runner.go:130] > # Where:
	I1210 00:39:47.275078  115982 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 00:39:47.275086  115982 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 00:39:47.275092  115982 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 00:39:47.275098  115982 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 00:39:47.275101  115982 command_runner.go:130] > #   in $PATH.
	I1210 00:39:47.275122  115982 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 00:39:47.275131  115982 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 00:39:47.275138  115982 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 00:39:47.275143  115982 command_runner.go:130] > #   state.
	I1210 00:39:47.275149  115982 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 00:39:47.275157  115982 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1210 00:39:47.275163  115982 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 00:39:47.275170  115982 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 00:39:47.275177  115982 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 00:39:47.275185  115982 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 00:39:47.275191  115982 command_runner.go:130] > #   The currently recognized values are:
	I1210 00:39:47.275199  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 00:39:47.275206  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 00:39:47.275213  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 00:39:47.275219  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 00:39:47.275228  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 00:39:47.275234  115982 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 00:39:47.275242  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 00:39:47.275248  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 00:39:47.275256  115982 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 00:39:47.275263  115982 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 00:39:47.275270  115982 command_runner.go:130] > #   deprecated option "conmon".
	I1210 00:39:47.275276  115982 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 00:39:47.275282  115982 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 00:39:47.275290  115982 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 00:39:47.275295  115982 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 00:39:47.275304  115982 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 00:39:47.275311  115982 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 00:39:47.275317  115982 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 00:39:47.275325  115982 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 00:39:47.275328  115982 command_runner.go:130] > #
	I1210 00:39:47.275332  115982 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 00:39:47.275335  115982 command_runner.go:130] > #
	I1210 00:39:47.275346  115982 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 00:39:47.275354  115982 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 00:39:47.275357  115982 command_runner.go:130] > #
	I1210 00:39:47.275363  115982 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 00:39:47.275369  115982 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 00:39:47.275372  115982 command_runner.go:130] > #
	I1210 00:39:47.275382  115982 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 00:39:47.275388  115982 command_runner.go:130] > # feature.
	I1210 00:39:47.275391  115982 command_runner.go:130] > #
	I1210 00:39:47.275397  115982 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1210 00:39:47.275404  115982 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 00:39:47.275411  115982 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 00:39:47.275419  115982 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 00:39:47.275425  115982 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 00:39:47.275428  115982 command_runner.go:130] > #
	I1210 00:39:47.275434  115982 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 00:39:47.275442  115982 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 00:39:47.275445  115982 command_runner.go:130] > #
	I1210 00:39:47.275451  115982 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1210 00:39:47.275459  115982 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 00:39:47.275462  115982 command_runner.go:130] > #
	I1210 00:39:47.275467  115982 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 00:39:47.275475  115982 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 00:39:47.275479  115982 command_runner.go:130] > # limitation.
	I1210 00:39:47.275485  115982 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 00:39:47.275491  115982 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1210 00:39:47.275495  115982 command_runner.go:130] > runtime_type = "oci"
	I1210 00:39:47.275500  115982 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 00:39:47.275504  115982 command_runner.go:130] > runtime_config_path = ""
	I1210 00:39:47.275509  115982 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 00:39:47.275514  115982 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 00:39:47.275518  115982 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 00:39:47.275524  115982 command_runner.go:130] > monitor_env = [
	I1210 00:39:47.275539  115982 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1210 00:39:47.275545  115982 command_runner.go:130] > ]
	I1210 00:39:47.275552  115982 command_runner.go:130] > privileged_without_host_devices = false
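	As a hedged sketch of the runtime-handler format documented above, a second handler could be declared alongside runc; the crun paths and the allowed seccomp-notifier annotation here are illustrative assumptions, not values from this run:
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"              # assumed binary location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]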
	I1210 00:39:47.275560  115982 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 00:39:47.275566  115982 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 00:39:47.275573  115982 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 00:39:47.275580  115982 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1210 00:39:47.275589  115982 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1210 00:39:47.275597  115982 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 00:39:47.275605  115982 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 00:39:47.275614  115982 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 00:39:47.275620  115982 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 00:39:47.275626  115982 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 00:39:47.275629  115982 command_runner.go:130] > # Example:
	I1210 00:39:47.275633  115982 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 00:39:47.275637  115982 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 00:39:47.275644  115982 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 00:39:47.275649  115982 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 00:39:47.275652  115982 command_runner.go:130] > # cpuset = 0
	I1210 00:39:47.275656  115982 command_runner.go:130] > # cpushares = "0-1"
	I1210 00:39:47.275659  115982 command_runner.go:130] > # Where:
	I1210 00:39:47.275663  115982 command_runner.go:130] > # The workload name is workload-type.
	I1210 00:39:47.275669  115982 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 00:39:47.275674  115982 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 00:39:47.275678  115982 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 00:39:47.275686  115982 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 00:39:47.275691  115982 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
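	A minimal, hypothetical workloads entry following that format might look like the sketch below; the workload name, annotation values, and resource defaults are assumptions, and field types may vary between CRI-O versions:
	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpuset = "0-1"     # assumed default cpuset for opted-in containers
	cpushares = 512    # assumed default cpu shares; per-container overrides use the annotation prefix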
	I1210 00:39:47.275695  115982 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 00:39:47.275701  115982 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 00:39:47.275705  115982 command_runner.go:130] > # Default value is set to true
	I1210 00:39:47.275709  115982 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 00:39:47.275714  115982 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 00:39:47.275719  115982 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 00:39:47.275729  115982 command_runner.go:130] > # Default value is set to 'false'
	I1210 00:39:47.275733  115982 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 00:39:47.275738  115982 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 00:39:47.275741  115982 command_runner.go:130] > #
	I1210 00:39:47.275746  115982 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 00:39:47.275751  115982 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1210 00:39:47.275757  115982 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1210 00:39:47.275762  115982 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1210 00:39:47.275767  115982 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1210 00:39:47.275770  115982 command_runner.go:130] > [crio.image]
	I1210 00:39:47.275775  115982 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 00:39:47.275779  115982 command_runner.go:130] > # default_transport = "docker://"
	I1210 00:39:47.275784  115982 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 00:39:47.275790  115982 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 00:39:47.275794  115982 command_runner.go:130] > # global_auth_file = ""
	I1210 00:39:47.275798  115982 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 00:39:47.275805  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.275809  115982 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1210 00:39:47.275815  115982 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 00:39:47.275820  115982 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 00:39:47.275824  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.275831  115982 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 00:39:47.275842  115982 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 00:39:47.275851  115982 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1210 00:39:47.275863  115982 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1210 00:39:47.275875  115982 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 00:39:47.275885  115982 command_runner.go:130] > # pause_command = "/pause"
	I1210 00:39:47.275894  115982 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 00:39:47.275906  115982 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 00:39:47.275918  115982 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 00:39:47.275928  115982 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 00:39:47.275936  115982 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 00:39:47.275942  115982 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 00:39:47.275955  115982 command_runner.go:130] > # pinned_images = [
	I1210 00:39:47.275960  115982 command_runner.go:130] > # ]
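	To illustrate the exact, glob, and keyword patterns just described, a hypothetical pinned_images list (only the pause image is taken from this configuration; the other names are placeholders) could be:
	pinned_images = [
		"registry.k8s.io/pause:3.10",        # exact match
		"registry.k8s.io/kube-apiserver*",   # glob: trailing wildcard
		"*coredns*",                         # keyword: wildcards on both ends
	]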
	I1210 00:39:47.275966  115982 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 00:39:47.275973  115982 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 00:39:47.275978  115982 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 00:39:47.275986  115982 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 00:39:47.275991  115982 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 00:39:47.275997  115982 command_runner.go:130] > # signature_policy = ""
	I1210 00:39:47.276002  115982 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 00:39:47.276009  115982 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 00:39:47.276016  115982 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 00:39:47.276022  115982 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1210 00:39:47.276027  115982 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 00:39:47.276034  115982 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 00:39:47.276040  115982 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 00:39:47.276048  115982 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 00:39:47.276052  115982 command_runner.go:130] > # changing them here.
	I1210 00:39:47.276056  115982 command_runner.go:130] > # insecure_registries = [
	I1210 00:39:47.276059  115982 command_runner.go:130] > # ]
	I1210 00:39:47.276065  115982 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 00:39:47.276070  115982 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 00:39:47.276075  115982 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 00:39:47.276081  115982 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 00:39:47.276085  115982 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 00:39:47.276093  115982 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 00:39:47.276099  115982 command_runner.go:130] > # CNI plugins.
	I1210 00:39:47.276102  115982 command_runner.go:130] > [crio.network]
	I1210 00:39:47.276108  115982 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 00:39:47.276113  115982 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1210 00:39:47.276118  115982 command_runner.go:130] > # cni_default_network = ""
	I1210 00:39:47.276124  115982 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 00:39:47.276132  115982 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 00:39:47.276139  115982 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 00:39:47.276147  115982 command_runner.go:130] > # plugin_dirs = [
	I1210 00:39:47.276154  115982 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 00:39:47.276157  115982 command_runner.go:130] > # ]
	I1210 00:39:47.276165  115982 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 00:39:47.276172  115982 command_runner.go:130] > [crio.metrics]
	I1210 00:39:47.276176  115982 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 00:39:47.276180  115982 command_runner.go:130] > enable_metrics = true
	I1210 00:39:47.276184  115982 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 00:39:47.276191  115982 command_runner.go:130] > # By default, all metrics are enabled.
	I1210 00:39:47.276197  115982 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 00:39:47.276205  115982 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 00:39:47.276210  115982 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 00:39:47.276216  115982 command_runner.go:130] > # metrics_collectors = [
	I1210 00:39:47.276220  115982 command_runner.go:130] > # 	"operations",
	I1210 00:39:47.276224  115982 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1210 00:39:47.276229  115982 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1210 00:39:47.276233  115982 command_runner.go:130] > # 	"operations_errors",
	I1210 00:39:47.276238  115982 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1210 00:39:47.276243  115982 command_runner.go:130] > # 	"image_pulls_by_name",
	I1210 00:39:47.276248  115982 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1210 00:39:47.276254  115982 command_runner.go:130] > # 	"image_pulls_failures",
	I1210 00:39:47.276258  115982 command_runner.go:130] > # 	"image_pulls_successes",
	I1210 00:39:47.276262  115982 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 00:39:47.276268  115982 command_runner.go:130] > # 	"image_layer_reuse",
	I1210 00:39:47.276273  115982 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 00:39:47.276283  115982 command_runner.go:130] > # 	"containers_oom_total",
	I1210 00:39:47.276287  115982 command_runner.go:130] > # 	"containers_oom",
	I1210 00:39:47.276291  115982 command_runner.go:130] > # 	"processes_defunct",
	I1210 00:39:47.276295  115982 command_runner.go:130] > # 	"operations_total",
	I1210 00:39:47.276303  115982 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 00:39:47.276310  115982 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 00:39:47.276314  115982 command_runner.go:130] > # 	"operations_errors_total",
	I1210 00:39:47.276320  115982 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 00:39:47.276329  115982 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 00:39:47.276336  115982 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 00:39:47.276340  115982 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 00:39:47.276348  115982 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 00:39:47.276352  115982 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 00:39:47.276359  115982 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 00:39:47.276363  115982 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 00:39:47.276366  115982 command_runner.go:130] > # ]
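	If only a subset of collectors were wanted, a sketch along these lines would enable them explicitly (collector names are taken from the list above; the port is the documented default):
	enable_metrics = true
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]
	metrics_port = 9090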
	I1210 00:39:47.276371  115982 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 00:39:47.276377  115982 command_runner.go:130] > # metrics_port = 9090
	I1210 00:39:47.276385  115982 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 00:39:47.276391  115982 command_runner.go:130] > # metrics_socket = ""
	I1210 00:39:47.276399  115982 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 00:39:47.276407  115982 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 00:39:47.276413  115982 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 00:39:47.276420  115982 command_runner.go:130] > # certificate on any modification event.
	I1210 00:39:47.276424  115982 command_runner.go:130] > # metrics_cert = ""
	I1210 00:39:47.276432  115982 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 00:39:47.276436  115982 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 00:39:47.276443  115982 command_runner.go:130] > # metrics_key = ""
	I1210 00:39:47.276448  115982 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 00:39:47.276451  115982 command_runner.go:130] > [crio.tracing]
	I1210 00:39:47.276457  115982 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 00:39:47.276462  115982 command_runner.go:130] > # enable_tracing = false
	I1210 00:39:47.276467  115982 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1210 00:39:47.276474  115982 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1210 00:39:47.276481  115982 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 00:39:47.276488  115982 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 00:39:47.276492  115982 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 00:39:47.276496  115982 command_runner.go:130] > [crio.nri]
	I1210 00:39:47.276500  115982 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 00:39:47.276512  115982 command_runner.go:130] > # enable_nri = false
	I1210 00:39:47.276519  115982 command_runner.go:130] > # NRI socket to listen on.
	I1210 00:39:47.276528  115982 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 00:39:47.276534  115982 command_runner.go:130] > # NRI plugin directory to use.
	I1210 00:39:47.276539  115982 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 00:39:47.276543  115982 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 00:39:47.276550  115982 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 00:39:47.276555  115982 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 00:39:47.276561  115982 command_runner.go:130] > # nri_disable_connections = false
	I1210 00:39:47.276565  115982 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 00:39:47.276569  115982 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 00:39:47.276576  115982 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 00:39:47.276581  115982 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 00:39:47.276586  115982 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 00:39:47.276590  115982 command_runner.go:130] > [crio.stats]
	I1210 00:39:47.276597  115982 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 00:39:47.276605  115982 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 00:39:47.276609  115982 command_runner.go:130] > # stats_collection_period = 0
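	Finally, a hedged sketch of the tracing and stats knobs listed above; the endpoint, sampling rate, and collection period are illustrative values, not settings from this run:
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample
	[crio.stats]
	stats_collection_period = 10   # collect pod and container stats every 10 seconds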
	I1210 00:39:47.276694  115982 cni.go:84] Creating CNI manager for ""
	I1210 00:39:47.276705  115982 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1210 00:39:47.276714  115982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:39:47.276740  115982 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-029725 NodeName:multinode-029725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:39:47.276884  115982 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-029725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:39:47.276962  115982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:39:47.287127  115982 command_runner.go:130] > kubeadm
	I1210 00:39:47.287148  115982 command_runner.go:130] > kubectl
	I1210 00:39:47.287154  115982 command_runner.go:130] > kubelet
	I1210 00:39:47.287183  115982 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:39:47.287244  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:39:47.296503  115982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1210 00:39:47.311915  115982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:39:47.327316  115982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1210 00:39:47.342248  115982 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1210 00:39:47.345683  115982 command_runner.go:130] > 192.168.39.24	control-plane.minikube.internal
	I1210 00:39:47.345746  115982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:39:47.483796  115982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:39:47.498651  115982 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725 for IP: 192.168.39.24
	I1210 00:39:47.498677  115982 certs.go:194] generating shared ca certs ...
	I1210 00:39:47.498698  115982 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:39:47.498883  115982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:39:47.498951  115982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:39:47.498966  115982 certs.go:256] generating profile certs ...
	I1210 00:39:47.499091  115982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/client.key
	I1210 00:39:47.499180  115982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key.e615d136
	I1210 00:39:47.499236  115982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key
	I1210 00:39:47.499250  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:39:47.499266  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:39:47.499283  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:39:47.499312  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:39:47.499338  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:39:47.499355  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:39:47.499373  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:39:47.499398  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:39:47.499457  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:39:47.499501  115982 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:39:47.499515  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:39:47.499545  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:39:47.499576  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:39:47.499605  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:39:47.500209  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:39:47.500291  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.500321  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.500339  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.501979  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:39:47.524820  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:39:47.546799  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:39:47.567917  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:39:47.589155  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:39:47.609850  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:39:47.632246  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:39:47.654556  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:39:47.676321  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:39:47.697872  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:39:47.719353  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:39:47.740582  115982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:39:47.755240  115982 ssh_runner.go:195] Run: openssl version
	I1210 00:39:47.760463  115982 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1210 00:39:47.760545  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:39:47.769963  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773838  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773871  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773908  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.778744  115982 command_runner.go:130] > 3ec20f2e
	I1210 00:39:47.778941  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:39:47.787614  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:39:47.797280  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801201  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801263  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801305  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.806253  115982 command_runner.go:130] > b5213941
	I1210 00:39:47.806316  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:39:47.815330  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:39:47.825508  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829552  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829632  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829673  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.834706  115982 command_runner.go:130] > 51391683
	I1210 00:39:47.834841  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:39:47.844143  115982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:39:47.848228  115982 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:39:47.848251  115982 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 00:39:47.848260  115982 command_runner.go:130] > Device: 253,1	Inode: 4197422     Links: 1
	I1210 00:39:47.848270  115982 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 00:39:47.848279  115982 command_runner.go:130] > Access: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848286  115982 command_runner.go:130] > Modify: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848295  115982 command_runner.go:130] > Change: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848307  115982 command_runner.go:130] >  Birth: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848359  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:39:47.853217  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.853358  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:39:47.858468  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.858700  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:39:47.863748  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.863796  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:39:47.868732  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.868885  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:39:47.874318  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.874359  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:39:47.879647  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.879712  115982 kubeadm.go:392] StartCluster: {Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:39:47.879860  115982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:39:47.879921  115982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:39:47.913525  115982 command_runner.go:130] > ce548c1af69485ec97d783ef4bc8378553e7aebf25d71fd09e05ffa7af9717c2
	I1210 00:39:47.913549  115982 command_runner.go:130] > dca777e391b607cabcfc13faaf91e40f93367799c1170c18ede23cbf9b41744d
	I1210 00:39:47.913558  115982 command_runner.go:130] > 790a0091a09b4bcd3316230c192d0e740fb9f0154fc465a21c0fa9a3447ceed6
	I1210 00:39:47.913569  115982 command_runner.go:130] > 19f7ffc0fde3e65fb91a26d70a77ae44d898832e2aca60c36c529bc0b3e4e25c
	I1210 00:39:47.913639  115982 command_runner.go:130] > 304150d1330c5715e865e384bc6a2b004fe37ec1ece13812de4bd2d41ce9beeb
	I1210 00:39:47.913668  115982 command_runner.go:130] > fe3b27671e381d98f592554b1dc47b6ae16393c97ca933850b221d9de963a187
	I1210 00:39:47.913680  115982 command_runner.go:130] > d9f66ffa76d335747f03a3eebab3b6bec74775761d3bfe63d475bb68a6487a48
	I1210 00:39:47.913761  115982 command_runner.go:130] > d33cbe88741979478ea3e99fbcc0c59bb3eabafa2402a6fa3748cef7f2ce4695
	I1210 00:39:47.915063  115982 cri.go:89] found id: "ce548c1af69485ec97d783ef4bc8378553e7aebf25d71fd09e05ffa7af9717c2"
	I1210 00:39:47.915076  115982 cri.go:89] found id: "dca777e391b607cabcfc13faaf91e40f93367799c1170c18ede23cbf9b41744d"
	I1210 00:39:47.915079  115982 cri.go:89] found id: "790a0091a09b4bcd3316230c192d0e740fb9f0154fc465a21c0fa9a3447ceed6"
	I1210 00:39:47.915083  115982 cri.go:89] found id: "19f7ffc0fde3e65fb91a26d70a77ae44d898832e2aca60c36c529bc0b3e4e25c"
	I1210 00:39:47.915087  115982 cri.go:89] found id: "304150d1330c5715e865e384bc6a2b004fe37ec1ece13812de4bd2d41ce9beeb"
	I1210 00:39:47.915092  115982 cri.go:89] found id: "fe3b27671e381d98f592554b1dc47b6ae16393c97ca933850b221d9de963a187"
	I1210 00:39:47.915096  115982 cri.go:89] found id: "d9f66ffa76d335747f03a3eebab3b6bec74775761d3bfe63d475bb68a6487a48"
	I1210 00:39:47.915100  115982 cri.go:89] found id: "d33cbe88741979478ea3e99fbcc0c59bb3eabafa2402a6fa3748cef7f2ce4695"
	I1210 00:39:47.915104  115982 cri.go:89] found id: ""
	I1210 00:39:47.915158  115982 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-029725 -n multinode-029725
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-029725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.40s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 stop
E1210 00:43:12.359065   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029725 stop: exit status 82 (2m0.449597333s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-029725-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-029725 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 status: (18.637803826s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr: (3.359527805s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-029725 -n multinode-029725
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 logs -n 25: (1.879793152s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725:/home/docker/cp-test_multinode-029725-m02_multinode-029725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725 sudo cat                                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m02_multinode-029725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03:/home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725-m03 sudo cat                                   | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp testdata/cp-test.txt                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725:/home/docker/cp-test_multinode-029725-m03_multinode-029725.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725 sudo cat                                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02:/home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725-m02 sudo cat                                   | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-029725 node stop m03                                                          | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	| node    | multinode-029725 node start                                                             | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| stop    | -p multinode-029725                                                                     | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| start   | -p multinode-029725                                                                     | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC |                     |
	| node    | multinode-029725 node delete                                                            | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC | 10 Dec 24 00:41 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-029725 stop                                                                   | multinode-029725 | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:38:14
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:38:14.190179  115982 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:38:14.190289  115982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:38:14.190298  115982 out.go:358] Setting ErrFile to fd 2...
	I1210 00:38:14.190302  115982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:38:14.190498  115982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:38:14.191028  115982 out.go:352] Setting JSON to false
	I1210 00:38:14.191870  115982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8445,"bootTime":1733782649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:38:14.191974  115982 start.go:139] virtualization: kvm guest
	I1210 00:38:14.194008  115982 out.go:177] * [multinode-029725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:38:14.195576  115982 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:38:14.195570  115982 notify.go:220] Checking for updates...
	I1210 00:38:14.197009  115982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:38:14.198170  115982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:38:14.199201  115982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:38:14.200232  115982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:38:14.201383  115982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:38:14.203476  115982 config.go:182] Loaded profile config "multinode-029725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:38:14.203575  115982 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:38:14.204032  115982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:38:14.204094  115982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:38:14.219050  115982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I1210 00:38:14.219530  115982 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:38:14.220090  115982 main.go:141] libmachine: Using API Version  1
	I1210 00:38:14.220109  115982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:38:14.220455  115982 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:38:14.220641  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.254760  115982 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:38:14.255876  115982 start.go:297] selected driver: kvm2
	I1210 00:38:14.255886  115982 start.go:901] validating driver "kvm2" against &{Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:38:14.256023  115982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:38:14.256323  115982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:38:14.256394  115982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:38:14.270620  115982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:38:14.271282  115982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:38:14.271333  115982 cni.go:84] Creating CNI manager for ""
	I1210 00:38:14.271395  115982 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1210 00:38:14.271453  115982 start.go:340] cluster config:
	{Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:38:14.271572  115982 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:38:14.273140  115982 out.go:177] * Starting "multinode-029725" primary control-plane node in "multinode-029725" cluster
	I1210 00:38:14.274391  115982 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:38:14.274427  115982 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:38:14.274437  115982 cache.go:56] Caching tarball of preloaded images
	I1210 00:38:14.274511  115982 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:38:14.274521  115982 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:38:14.274657  115982 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/config.json ...
	I1210 00:38:14.274865  115982 start.go:360] acquireMachinesLock for multinode-029725: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:38:14.274910  115982 start.go:364] duration metric: took 26.103µs to acquireMachinesLock for "multinode-029725"
	I1210 00:38:14.274924  115982 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:38:14.274932  115982 fix.go:54] fixHost starting: 
	I1210 00:38:14.275174  115982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:38:14.275203  115982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:38:14.288820  115982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I1210 00:38:14.289259  115982 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:38:14.289653  115982 main.go:141] libmachine: Using API Version  1
	I1210 00:38:14.289677  115982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:38:14.290032  115982 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:38:14.290207  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.290353  115982 main.go:141] libmachine: (multinode-029725) Calling .GetState
	I1210 00:38:14.291871  115982 fix.go:112] recreateIfNeeded on multinode-029725: state=Running err=<nil>
	W1210 00:38:14.291901  115982 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:38:14.293691  115982 out.go:177] * Updating the running kvm2 "multinode-029725" VM ...
	I1210 00:38:14.294875  115982 machine.go:93] provisionDockerMachine start ...
	I1210 00:38:14.294893  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:38:14.295081  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.297772  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.298256  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.298296  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.298488  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.298697  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.298849  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.298954  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.299070  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.299256  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.299266  115982 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:38:14.403279  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029725
	
	I1210 00:38:14.403307  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.403534  115982 buildroot.go:166] provisioning hostname "multinode-029725"
	I1210 00:38:14.403553  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.403724  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.406066  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.406409  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.406435  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.406593  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.406748  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.406878  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.406984  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.407102  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.407286  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.407298  115982 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-029725 && echo "multinode-029725" | sudo tee /etc/hostname
	I1210 00:38:14.527246  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-029725
	
	I1210 00:38:14.527279  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.530053  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.530426  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.530469  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.530617  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.530801  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.530963  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.531093  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.531249  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.531407  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.531425  115982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-029725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-029725/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-029725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:38:14.638030  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:38:14.638069  115982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:38:14.638097  115982 buildroot.go:174] setting up certificates
	I1210 00:38:14.638117  115982 provision.go:84] configureAuth start
	I1210 00:38:14.638136  115982 main.go:141] libmachine: (multinode-029725) Calling .GetMachineName
	I1210 00:38:14.638429  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:38:14.641174  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.641530  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.641555  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.641702  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.643918  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.644269  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.644304  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.644460  115982 provision.go:143] copyHostCerts
	I1210 00:38:14.644499  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:38:14.644536  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:38:14.644553  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:38:14.644617  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:38:14.644703  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:38:14.644724  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:38:14.644731  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:38:14.644756  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:38:14.644812  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:38:14.644833  115982 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:38:14.644839  115982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:38:14.644861  115982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:38:14.644920  115982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.multinode-029725 san=[127.0.0.1 192.168.39.24 localhost minikube multinode-029725]
	I1210 00:38:14.693389  115982 provision.go:177] copyRemoteCerts
	I1210 00:38:14.693434  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:38:14.693454  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.695835  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.696134  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.696165  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.696278  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.696428  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.696602  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.696707  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:38:14.776128  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1210 00:38:14.776200  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:38:14.798594  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1210 00:38:14.798636  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:38:14.824214  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1210 00:38:14.824275  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1210 00:38:14.846865  115982 provision.go:87] duration metric: took 208.732774ms to configureAuth
	I1210 00:38:14.846886  115982 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:38:14.847099  115982 config.go:182] Loaded profile config "multinode-029725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:38:14.847176  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:38:14.849833  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.850161  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:38:14.850189  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:38:14.850429  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:38:14.850628  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.850807  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:38:14.850930  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:38:14.851082  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:38:14.851287  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:38:14.851303  115982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:39:45.474499  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:39:45.474529  115982 machine.go:96] duration metric: took 1m31.179639995s to provisionDockerMachine
	I1210 00:39:45.474546  115982 start.go:293] postStartSetup for "multinode-029725" (driver="kvm2")
	I1210 00:39:45.474578  115982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:39:45.474606  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.475048  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:39:45.475086  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.477988  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.478420  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.478445  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.478644  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.478851  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.479019  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.479168  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.561763  115982 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:39:45.565214  115982 command_runner.go:130] > NAME=Buildroot
	I1210 00:39:45.565232  115982 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1210 00:39:45.565251  115982 command_runner.go:130] > ID=buildroot
	I1210 00:39:45.565259  115982 command_runner.go:130] > VERSION_ID=2023.02.9
	I1210 00:39:45.565270  115982 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1210 00:39:45.565358  115982 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:39:45.565381  115982 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:39:45.565458  115982 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:39:45.565565  115982 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:39:45.565580  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /etc/ssl/certs/862962.pem
	I1210 00:39:45.565713  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:39:45.573890  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:39:45.595346  115982 start.go:296] duration metric: took 120.786681ms for postStartSetup
	I1210 00:39:45.595397  115982 fix.go:56] duration metric: took 1m31.320463472s for fixHost
	I1210 00:39:45.595423  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.597962  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.598320  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.598346  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.598507  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.598674  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.598839  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.598955  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.599093  115982 main.go:141] libmachine: Using SSH client type: native
	I1210 00:39:45.599308  115982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1210 00:39:45.599323  115982 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:39:45.698410  115982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791185.674166324
	
	I1210 00:39:45.698435  115982 fix.go:216] guest clock: 1733791185.674166324
	I1210 00:39:45.698445  115982 fix.go:229] Guest: 2024-12-10 00:39:45.674166324 +0000 UTC Remote: 2024-12-10 00:39:45.595403119 +0000 UTC m=+91.444659181 (delta=78.763205ms)
	I1210 00:39:45.698493  115982 fix.go:200] guest clock delta is within tolerance: 78.763205ms
	I1210 00:39:45.698506  115982 start.go:83] releasing machines lock for "multinode-029725", held for 1m31.423586478s
	I1210 00:39:45.698533  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.698818  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:39:45.701390  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.701741  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.701769  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.701941  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702425  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702617  115982 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:39:45.702703  115982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:39:45.702757  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.702862  115982 ssh_runner.go:195] Run: cat /version.json
	I1210 00:39:45.702887  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:39:45.705191  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705499  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.705526  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705594  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.705696  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.705889  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.706062  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.706080  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:45.706114  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:45.706218  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.706281  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:39:45.706436  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:39:45.706614  115982 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:39:45.706747  115982 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:39:45.782484  115982 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1210 00:39:45.782855  115982 ssh_runner.go:195] Run: systemctl --version
	I1210 00:39:45.802553  115982 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1210 00:39:45.802605  115982 command_runner.go:130] > systemd 252 (252)
	I1210 00:39:45.802622  115982 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1210 00:39:45.802681  115982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:39:45.966307  115982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1210 00:39:45.972133  115982 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1210 00:39:45.972256  115982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:39:45.972326  115982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:39:45.981373  115982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:39:45.981394  115982 start.go:495] detecting cgroup driver to use...
	I1210 00:39:45.981484  115982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:39:45.998843  115982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:39:46.013150  115982 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:39:46.013215  115982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:39:46.027740  115982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:39:46.042073  115982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:39:46.196596  115982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:39:46.327771  115982 docker.go:233] disabling docker service ...
	I1210 00:39:46.327841  115982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:39:46.344809  115982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:39:46.357520  115982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:39:46.489238  115982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:39:46.623593  115982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:39:46.636063  115982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:39:46.653089  115982 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1210 00:39:46.653512  115982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:39:46.653574  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.663053  115982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:39:46.663117  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.672299  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.681370  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.690423  115982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:39:46.699716  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.708727  115982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.718437  115982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:39:46.727592  115982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:39:46.735896  115982 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1210 00:39:46.735967  115982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:39:46.743996  115982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:39:46.872593  115982 ssh_runner.go:195] Run: sudo systemctl restart crio
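The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the "pod" cgroup, and opens unprivileged ports via default_sysctls. Assuming only the keys those commands touch (section headers follow crio.conf(5); the drop-in shipped on the ISO may arrange them differently), the resulting fragment would look roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]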
	I1210 00:39:47.051539  115982 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:39:47.051623  115982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:39:47.056412  115982 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1210 00:39:47.056437  115982 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1210 00:39:47.056466  115982 command_runner.go:130] > Device: 0,22	Inode: 1279        Links: 1
	I1210 00:39:47.056481  115982 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 00:39:47.056490  115982 command_runner.go:130] > Access: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056507  115982 command_runner.go:130] > Modify: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056516  115982 command_runner.go:130] > Change: 2024-12-10 00:39:46.931179943 +0000
	I1210 00:39:47.056524  115982 command_runner.go:130] >  Birth: -
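start.go then waits up to 60s for /var/run/crio/crio.sock to appear and stats it to confirm it is a socket before moving on to the crictl version check. A hedged sketch of that kind of poll loop (names and the 500ms interval are assumptions, not minikube's start.go):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket,
    // or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }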
	I1210 00:39:47.056780  115982 start.go:563] Will wait 60s for crictl version
	I1210 00:39:47.056830  115982 ssh_runner.go:195] Run: which crictl
	I1210 00:39:47.060330  115982 command_runner.go:130] > /usr/bin/crictl
	I1210 00:39:47.060403  115982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:39:47.096766  115982 command_runner.go:130] > Version:  0.1.0
	I1210 00:39:47.096785  115982 command_runner.go:130] > RuntimeName:  cri-o
	I1210 00:39:47.096789  115982 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1210 00:39:47.096794  115982 command_runner.go:130] > RuntimeApiVersion:  v1
	I1210 00:39:47.096904  115982 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:39:47.096958  115982 ssh_runner.go:195] Run: crio --version
	I1210 00:39:47.121883  115982 command_runner.go:130] > crio version 1.29.1
	I1210 00:39:47.121898  115982 command_runner.go:130] > Version:        1.29.1
	I1210 00:39:47.121903  115982 command_runner.go:130] > GitCommit:      unknown
	I1210 00:39:47.121908  115982 command_runner.go:130] > GitCommitDate:  unknown
	I1210 00:39:47.121911  115982 command_runner.go:130] > GitTreeState:   clean
	I1210 00:39:47.121916  115982 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1210 00:39:47.121920  115982 command_runner.go:130] > GoVersion:      go1.21.6
	I1210 00:39:47.121924  115982 command_runner.go:130] > Compiler:       gc
	I1210 00:39:47.121930  115982 command_runner.go:130] > Platform:       linux/amd64
	I1210 00:39:47.121936  115982 command_runner.go:130] > Linkmode:       dynamic
	I1210 00:39:47.121943  115982 command_runner.go:130] > BuildTags:      
	I1210 00:39:47.121951  115982 command_runner.go:130] >   containers_image_ostree_stub
	I1210 00:39:47.121958  115982 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1210 00:39:47.121967  115982 command_runner.go:130] >   btrfs_noversion
	I1210 00:39:47.121972  115982 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1210 00:39:47.121986  115982 command_runner.go:130] >   libdm_no_deferred_remove
	I1210 00:39:47.121992  115982 command_runner.go:130] >   seccomp
	I1210 00:39:47.121997  115982 command_runner.go:130] > LDFlags:          unknown
	I1210 00:39:47.122001  115982 command_runner.go:130] > SeccompEnabled:   true
	I1210 00:39:47.122006  115982 command_runner.go:130] > AppArmorEnabled:  false
	I1210 00:39:47.122165  115982 ssh_runner.go:195] Run: crio --version
	I1210 00:39:47.149671  115982 command_runner.go:130] > crio version 1.29.1
	I1210 00:39:47.149697  115982 command_runner.go:130] > Version:        1.29.1
	I1210 00:39:47.149720  115982 command_runner.go:130] > GitCommit:      unknown
	I1210 00:39:47.149727  115982 command_runner.go:130] > GitCommitDate:  unknown
	I1210 00:39:47.149734  115982 command_runner.go:130] > GitTreeState:   clean
	I1210 00:39:47.149743  115982 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1210 00:39:47.149752  115982 command_runner.go:130] > GoVersion:      go1.21.6
	I1210 00:39:47.149756  115982 command_runner.go:130] > Compiler:       gc
	I1210 00:39:47.149761  115982 command_runner.go:130] > Platform:       linux/amd64
	I1210 00:39:47.149765  115982 command_runner.go:130] > Linkmode:       dynamic
	I1210 00:39:47.149771  115982 command_runner.go:130] > BuildTags:      
	I1210 00:39:47.149775  115982 command_runner.go:130] >   containers_image_ostree_stub
	I1210 00:39:47.149780  115982 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1210 00:39:47.149783  115982 command_runner.go:130] >   btrfs_noversion
	I1210 00:39:47.149788  115982 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1210 00:39:47.149795  115982 command_runner.go:130] >   libdm_no_deferred_remove
	I1210 00:39:47.149798  115982 command_runner.go:130] >   seccomp
	I1210 00:39:47.149803  115982 command_runner.go:130] > LDFlags:          unknown
	I1210 00:39:47.149807  115982 command_runner.go:130] > SeccompEnabled:   true
	I1210 00:39:47.149813  115982 command_runner.go:130] > AppArmorEnabled:  false
	I1210 00:39:47.151805  115982 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:39:47.153239  115982 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:39:47.155974  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:47.156318  115982 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:39:47.156340  115982 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:39:47.156539  115982 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:39:47.160327  115982 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1210 00:39:47.160440  115982 kubeadm.go:883] updating cluster {Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:39:47.160610  115982 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:39:47.160665  115982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:39:47.198321  115982 command_runner.go:130] > {
	I1210 00:39:47.198341  115982 command_runner.go:130] >   "images": [
	I1210 00:39:47.198346  115982 command_runner.go:130] >     {
	I1210 00:39:47.198354  115982 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1210 00:39:47.198358  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198364  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1210 00:39:47.198368  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198372  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198380  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1210 00:39:47.198387  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1210 00:39:47.198390  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198402  115982 command_runner.go:130] >       "size": "94965812",
	I1210 00:39:47.198408  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198417  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198424  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198432  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198435  115982 command_runner.go:130] >     },
	I1210 00:39:47.198438  115982 command_runner.go:130] >     {
	I1210 00:39:47.198444  115982 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1210 00:39:47.198451  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198456  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1210 00:39:47.198462  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198466  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198474  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1210 00:39:47.198480  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1210 00:39:47.198484  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198488  115982 command_runner.go:130] >       "size": "94963761",
	I1210 00:39:47.198492  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198499  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198503  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198507  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198513  115982 command_runner.go:130] >     },
	I1210 00:39:47.198516  115982 command_runner.go:130] >     {
	I1210 00:39:47.198522  115982 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1210 00:39:47.198527  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198531  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1210 00:39:47.198535  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198540  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198547  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1210 00:39:47.198555  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1210 00:39:47.198571  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198575  115982 command_runner.go:130] >       "size": "1363676",
	I1210 00:39:47.198582  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198585  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198594  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198598  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198601  115982 command_runner.go:130] >     },
	I1210 00:39:47.198605  115982 command_runner.go:130] >     {
	I1210 00:39:47.198611  115982 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1210 00:39:47.198615  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198622  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 00:39:47.198626  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198630  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198638  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1210 00:39:47.198651  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1210 00:39:47.198657  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198661  115982 command_runner.go:130] >       "size": "31470524",
	I1210 00:39:47.198664  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198668  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198679  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198683  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198687  115982 command_runner.go:130] >     },
	I1210 00:39:47.198690  115982 command_runner.go:130] >     {
	I1210 00:39:47.198696  115982 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1210 00:39:47.198702  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198707  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1210 00:39:47.198710  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198715  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198724  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1210 00:39:47.198731  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1210 00:39:47.198737  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198740  115982 command_runner.go:130] >       "size": "63273227",
	I1210 00:39:47.198744  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.198748  115982 command_runner.go:130] >       "username": "nonroot",
	I1210 00:39:47.198751  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198755  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198758  115982 command_runner.go:130] >     },
	I1210 00:39:47.198768  115982 command_runner.go:130] >     {
	I1210 00:39:47.198777  115982 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1210 00:39:47.198781  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198786  115982 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1210 00:39:47.198789  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198793  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198799  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1210 00:39:47.198806  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1210 00:39:47.198809  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198813  115982 command_runner.go:130] >       "size": "149009664",
	I1210 00:39:47.198817  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.198821  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.198825  115982 command_runner.go:130] >       },
	I1210 00:39:47.198828  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198832  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198836  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198839  115982 command_runner.go:130] >     },
	I1210 00:39:47.198842  115982 command_runner.go:130] >     {
	I1210 00:39:47.198848  115982 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1210 00:39:47.198854  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198859  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1210 00:39:47.198862  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198866  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198873  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1210 00:39:47.198882  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1210 00:39:47.198886  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198890  115982 command_runner.go:130] >       "size": "95274464",
	I1210 00:39:47.198894  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.198898  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.198901  115982 command_runner.go:130] >       },
	I1210 00:39:47.198905  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.198911  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.198915  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.198923  115982 command_runner.go:130] >     },
	I1210 00:39:47.198929  115982 command_runner.go:130] >     {
	I1210 00:39:47.198934  115982 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1210 00:39:47.198941  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.198945  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1210 00:39:47.198949  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198952  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.198972  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1210 00:39:47.198986  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1210 00:39:47.198989  115982 command_runner.go:130] >       ],
	I1210 00:39:47.198993  115982 command_runner.go:130] >       "size": "89474374",
	I1210 00:39:47.198997  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199004  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.199007  115982 command_runner.go:130] >       },
	I1210 00:39:47.199010  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199014  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199017  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199020  115982 command_runner.go:130] >     },
	I1210 00:39:47.199023  115982 command_runner.go:130] >     {
	I1210 00:39:47.199029  115982 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1210 00:39:47.199032  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199037  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1210 00:39:47.199040  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199044  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199050  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1210 00:39:47.199057  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1210 00:39:47.199060  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199064  115982 command_runner.go:130] >       "size": "92783513",
	I1210 00:39:47.199068  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.199071  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199074  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199078  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199080  115982 command_runner.go:130] >     },
	I1210 00:39:47.199088  115982 command_runner.go:130] >     {
	I1210 00:39:47.199094  115982 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1210 00:39:47.199097  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199102  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1210 00:39:47.199105  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199109  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199115  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1210 00:39:47.199122  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1210 00:39:47.199125  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199128  115982 command_runner.go:130] >       "size": "68457798",
	I1210 00:39:47.199134  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199138  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.199141  115982 command_runner.go:130] >       },
	I1210 00:39:47.199145  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199156  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199160  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.199163  115982 command_runner.go:130] >     },
	I1210 00:39:47.199167  115982 command_runner.go:130] >     {
	I1210 00:39:47.199172  115982 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1210 00:39:47.199178  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.199182  115982 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1210 00:39:47.199186  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199189  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.199198  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1210 00:39:47.199206  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1210 00:39:47.199210  115982 command_runner.go:130] >       ],
	I1210 00:39:47.199213  115982 command_runner.go:130] >       "size": "742080",
	I1210 00:39:47.199217  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.199221  115982 command_runner.go:130] >         "value": "65535"
	I1210 00:39:47.199224  115982 command_runner.go:130] >       },
	I1210 00:39:47.199228  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.199232  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.199236  115982 command_runner.go:130] >       "pinned": true
	I1210 00:39:47.199246  115982 command_runner.go:130] >     }
	I1210 00:39:47.199249  115982 command_runner.go:130] >   ]
	I1210 00:39:47.199252  115982 command_runner.go:130] > }
	I1210 00:39:47.199867  115982 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:39:47.199883  115982 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:39:47.199926  115982 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:39:47.227942  115982 command_runner.go:130] > {
	I1210 00:39:47.227960  115982 command_runner.go:130] >   "images": [
	I1210 00:39:47.227964  115982 command_runner.go:130] >     {
	I1210 00:39:47.227971  115982 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1210 00:39:47.227976  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.227982  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1210 00:39:47.227985  115982 command_runner.go:130] >       ],
	I1210 00:39:47.227989  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.227997  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1210 00:39:47.228004  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1210 00:39:47.228007  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228011  115982 command_runner.go:130] >       "size": "94965812",
	I1210 00:39:47.228015  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228019  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228047  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228060  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228063  115982 command_runner.go:130] >     },
	I1210 00:39:47.228067  115982 command_runner.go:130] >     {
	I1210 00:39:47.228072  115982 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1210 00:39:47.228076  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228084  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1210 00:39:47.228087  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228091  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228097  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1210 00:39:47.228104  115982 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1210 00:39:47.228108  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228112  115982 command_runner.go:130] >       "size": "94963761",
	I1210 00:39:47.228118  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228125  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228135  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228140  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228143  115982 command_runner.go:130] >     },
	I1210 00:39:47.228155  115982 command_runner.go:130] >     {
	I1210 00:39:47.228164  115982 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1210 00:39:47.228168  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228172  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1210 00:39:47.228179  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228182  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228189  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1210 00:39:47.228195  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1210 00:39:47.228199  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228203  115982 command_runner.go:130] >       "size": "1363676",
	I1210 00:39:47.228207  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228213  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228220  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228224  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228227  115982 command_runner.go:130] >     },
	I1210 00:39:47.228230  115982 command_runner.go:130] >     {
	I1210 00:39:47.228236  115982 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1210 00:39:47.228242  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228247  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1210 00:39:47.228250  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228253  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228260  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1210 00:39:47.228277  115982 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1210 00:39:47.228281  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228285  115982 command_runner.go:130] >       "size": "31470524",
	I1210 00:39:47.228288  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228292  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228295  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228299  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228303  115982 command_runner.go:130] >     },
	I1210 00:39:47.228311  115982 command_runner.go:130] >     {
	I1210 00:39:47.228319  115982 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1210 00:39:47.228322  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228328  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1210 00:39:47.228332  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228335  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228342  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1210 00:39:47.228349  115982 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1210 00:39:47.228353  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228357  115982 command_runner.go:130] >       "size": "63273227",
	I1210 00:39:47.228363  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228367  115982 command_runner.go:130] >       "username": "nonroot",
	I1210 00:39:47.228371  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228374  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228378  115982 command_runner.go:130] >     },
	I1210 00:39:47.228389  115982 command_runner.go:130] >     {
	I1210 00:39:47.228395  115982 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1210 00:39:47.228400  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228404  115982 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1210 00:39:47.228408  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228412  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228420  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1210 00:39:47.228429  115982 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1210 00:39:47.228433  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228437  115982 command_runner.go:130] >       "size": "149009664",
	I1210 00:39:47.228441  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228444  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228450  115982 command_runner.go:130] >       },
	I1210 00:39:47.228456  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228460  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228464  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228467  115982 command_runner.go:130] >     },
	I1210 00:39:47.228470  115982 command_runner.go:130] >     {
	I1210 00:39:47.228480  115982 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1210 00:39:47.228486  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228491  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1210 00:39:47.228497  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228500  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228507  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1210 00:39:47.228517  115982 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1210 00:39:47.228520  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228524  115982 command_runner.go:130] >       "size": "95274464",
	I1210 00:39:47.228527  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228531  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228537  115982 command_runner.go:130] >       },
	I1210 00:39:47.228541  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228545  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228549  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228552  115982 command_runner.go:130] >     },
	I1210 00:39:47.228555  115982 command_runner.go:130] >     {
	I1210 00:39:47.228561  115982 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1210 00:39:47.228567  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228572  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1210 00:39:47.228575  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228579  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228598  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1210 00:39:47.228608  115982 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1210 00:39:47.228612  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228615  115982 command_runner.go:130] >       "size": "89474374",
	I1210 00:39:47.228619  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228623  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228626  115982 command_runner.go:130] >       },
	I1210 00:39:47.228630  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228633  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228636  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228639  115982 command_runner.go:130] >     },
	I1210 00:39:47.228647  115982 command_runner.go:130] >     {
	I1210 00:39:47.228655  115982 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1210 00:39:47.228659  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228665  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1210 00:39:47.228669  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228676  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228683  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1210 00:39:47.228694  115982 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1210 00:39:47.228700  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228704  115982 command_runner.go:130] >       "size": "92783513",
	I1210 00:39:47.228708  115982 command_runner.go:130] >       "uid": null,
	I1210 00:39:47.228712  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228716  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228719  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228723  115982 command_runner.go:130] >     },
	I1210 00:39:47.228726  115982 command_runner.go:130] >     {
	I1210 00:39:47.228731  115982 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1210 00:39:47.228737  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228742  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1210 00:39:47.228747  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228751  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228758  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1210 00:39:47.228767  115982 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1210 00:39:47.228770  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228774  115982 command_runner.go:130] >       "size": "68457798",
	I1210 00:39:47.228778  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228782  115982 command_runner.go:130] >         "value": "0"
	I1210 00:39:47.228785  115982 command_runner.go:130] >       },
	I1210 00:39:47.228789  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228792  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228796  115982 command_runner.go:130] >       "pinned": false
	I1210 00:39:47.228800  115982 command_runner.go:130] >     },
	I1210 00:39:47.228809  115982 command_runner.go:130] >     {
	I1210 00:39:47.228822  115982 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1210 00:39:47.228829  115982 command_runner.go:130] >       "repoTags": [
	I1210 00:39:47.228833  115982 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1210 00:39:47.228839  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228842  115982 command_runner.go:130] >       "repoDigests": [
	I1210 00:39:47.228848  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1210 00:39:47.228857  115982 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1210 00:39:47.228861  115982 command_runner.go:130] >       ],
	I1210 00:39:47.228866  115982 command_runner.go:130] >       "size": "742080",
	I1210 00:39:47.228870  115982 command_runner.go:130] >       "uid": {
	I1210 00:39:47.228876  115982 command_runner.go:130] >         "value": "65535"
	I1210 00:39:47.228879  115982 command_runner.go:130] >       },
	I1210 00:39:47.228883  115982 command_runner.go:130] >       "username": "",
	I1210 00:39:47.228889  115982 command_runner.go:130] >       "spec": null,
	I1210 00:39:47.228892  115982 command_runner.go:130] >       "pinned": true
	I1210 00:39:47.228896  115982 command_runner.go:130] >     }
	I1210 00:39:47.228899  115982 command_runner.go:130] >   ]
	I1210 00:39:47.228907  115982 command_runner.go:130] > }
	I1210 00:39:47.229462  115982 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:39:47.229477  115982 cache_images.go:84] Images are preloaded, skipping loading
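Both crio.go:514 lines reach the "all images are preloaded" conclusion by decoding the JSON printed by "sudo crictl images --output json" and checking the required tags against it. A small sketch of that kind of check (the struct fields match the JSON shown above; the required-image list here is an illustrative subset, not minikube's full list):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("unexpected output:", err)
            return
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Illustrative subset of the images needed for Kubernetes v1.31.2 on CRI-O.
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.31.2",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/pause:3.10",
        }
        for _, r := range required {
            if !have[r] {
                fmt.Println("missing, would extract the preload tarball:", r)
                return
            }
        }
        fmt.Println("all images are preloaded for cri-o runtime")
    }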
	I1210 00:39:47.229486  115982 kubeadm.go:934] updating node { 192.168.39.24 8443 v1.31.2 crio true true} ...
	I1210 00:39:47.229598  115982 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-029725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
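kubeadm.go:946 prints the systemd drop-in that overrides the kubelet ExecStart with node-specific flags (--hostname-override, --node-ip, --kubeconfig) before dumping the cluster config. A hedged sketch of rendering that unit text from the node values with text/template (the template body mirrors the log above; writing to stdout is a stand-in, since the destination path on the guest is not shown in this log):

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        data := map[string]string{ // values taken from the log above
            "KubernetesVersion": "v1.31.2",
            "NodeName":          "multinode-029725",
            "NodeIP":            "192.168.39.24",
        }
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }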
	I1210 00:39:47.229665  115982 ssh_runner.go:195] Run: crio config
	I1210 00:39:47.258833  115982 command_runner.go:130] ! time="2024-12-10 00:39:47.234608337Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1210 00:39:47.265305  115982 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1210 00:39:47.272805  115982 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1210 00:39:47.272833  115982 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1210 00:39:47.272844  115982 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1210 00:39:47.272849  115982 command_runner.go:130] > #
	I1210 00:39:47.272863  115982 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1210 00:39:47.272877  115982 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1210 00:39:47.272889  115982 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1210 00:39:47.272902  115982 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1210 00:39:47.272911  115982 command_runner.go:130] > # reload'.
	I1210 00:39:47.272935  115982 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1210 00:39:47.272948  115982 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1210 00:39:47.272959  115982 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1210 00:39:47.272971  115982 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1210 00:39:47.272978  115982 command_runner.go:130] > [crio]
	I1210 00:39:47.272987  115982 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1210 00:39:47.272998  115982 command_runner.go:130] > # containers images, in this directory.
	I1210 00:39:47.273008  115982 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1210 00:39:47.273024  115982 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1210 00:39:47.273034  115982 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1210 00:39:47.273049  115982 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1210 00:39:47.273058  115982 command_runner.go:130] > # imagestore = ""
	I1210 00:39:47.273068  115982 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1210 00:39:47.273083  115982 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1210 00:39:47.273091  115982 command_runner.go:130] > storage_driver = "overlay"
	I1210 00:39:47.273096  115982 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1210 00:39:47.273104  115982 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1210 00:39:47.273110  115982 command_runner.go:130] > storage_option = [
	I1210 00:39:47.273116  115982 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1210 00:39:47.273119  115982 command_runner.go:130] > ]
	I1210 00:39:47.273128  115982 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1210 00:39:47.273137  115982 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1210 00:39:47.273144  115982 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1210 00:39:47.273149  115982 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1210 00:39:47.273157  115982 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1210 00:39:47.273161  115982 command_runner.go:130] > # always happen on a node reboot
	I1210 00:39:47.273166  115982 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1210 00:39:47.273182  115982 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1210 00:39:47.273190  115982 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1210 00:39:47.273195  115982 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1210 00:39:47.273200  115982 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1210 00:39:47.273207  115982 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1210 00:39:47.273216  115982 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1210 00:39:47.273220  115982 command_runner.go:130] > # internal_wipe = true
	I1210 00:39:47.273228  115982 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1210 00:39:47.273241  115982 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1210 00:39:47.273248  115982 command_runner.go:130] > # internal_repair = false
	I1210 00:39:47.273253  115982 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1210 00:39:47.273262  115982 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1210 00:39:47.273267  115982 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1210 00:39:47.273274  115982 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1210 00:39:47.273281  115982 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1210 00:39:47.273285  115982 command_runner.go:130] > [crio.api]
	I1210 00:39:47.273293  115982 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1210 00:39:47.273297  115982 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1210 00:39:47.273305  115982 command_runner.go:130] > # IP address on which the stream server will listen.
	I1210 00:39:47.273313  115982 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1210 00:39:47.273322  115982 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1210 00:39:47.273327  115982 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1210 00:39:47.273333  115982 command_runner.go:130] > # stream_port = "0"
	I1210 00:39:47.273338  115982 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1210 00:39:47.273344  115982 command_runner.go:130] > # stream_enable_tls = false
	I1210 00:39:47.273349  115982 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1210 00:39:47.273356  115982 command_runner.go:130] > # stream_idle_timeout = ""
	I1210 00:39:47.273364  115982 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1210 00:39:47.273370  115982 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1210 00:39:47.273374  115982 command_runner.go:130] > # minutes.
	I1210 00:39:47.273382  115982 command_runner.go:130] > # stream_tls_cert = ""
	I1210 00:39:47.273390  115982 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1210 00:39:47.273396  115982 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1210 00:39:47.273402  115982 command_runner.go:130] > # stream_tls_key = ""
	I1210 00:39:47.273407  115982 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1210 00:39:47.273413  115982 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1210 00:39:47.273432  115982 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1210 00:39:47.273439  115982 command_runner.go:130] > # stream_tls_ca = ""
	I1210 00:39:47.273446  115982 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 00:39:47.273453  115982 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1210 00:39:47.273459  115982 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1210 00:39:47.273466  115982 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1210 00:39:47.273471  115982 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1210 00:39:47.273477  115982 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1210 00:39:47.273481  115982 command_runner.go:130] > [crio.runtime]
	I1210 00:39:47.273487  115982 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1210 00:39:47.273494  115982 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1210 00:39:47.273498  115982 command_runner.go:130] > # "nofile=1024:2048"
	I1210 00:39:47.273507  115982 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1210 00:39:47.273511  115982 command_runner.go:130] > # default_ulimits = [
	I1210 00:39:47.273517  115982 command_runner.go:130] > # ]
	I1210 00:39:47.273522  115982 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1210 00:39:47.273533  115982 command_runner.go:130] > # no_pivot = false
	I1210 00:39:47.273541  115982 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1210 00:39:47.273547  115982 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1210 00:39:47.273554  115982 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1210 00:39:47.273559  115982 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1210 00:39:47.273566  115982 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1210 00:39:47.273572  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 00:39:47.273579  115982 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1210 00:39:47.273583  115982 command_runner.go:130] > # Cgroup setting for conmon
	I1210 00:39:47.273589  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1210 00:39:47.273594  115982 command_runner.go:130] > conmon_cgroup = "pod"
	I1210 00:39:47.273600  115982 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1210 00:39:47.273607  115982 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1210 00:39:47.273616  115982 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1210 00:39:47.273622  115982 command_runner.go:130] > conmon_env = [
	I1210 00:39:47.273627  115982 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1210 00:39:47.273632  115982 command_runner.go:130] > ]
	I1210 00:39:47.273637  115982 command_runner.go:130] > # Additional environment variables to set for all the
	I1210 00:39:47.273642  115982 command_runner.go:130] > # containers. These are overridden if set in the
	I1210 00:39:47.273649  115982 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1210 00:39:47.273653  115982 command_runner.go:130] > # default_env = [
	I1210 00:39:47.273656  115982 command_runner.go:130] > # ]
	I1210 00:39:47.273662  115982 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1210 00:39:47.273669  115982 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1210 00:39:47.273673  115982 command_runner.go:130] > # selinux = false
	I1210 00:39:47.273679  115982 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1210 00:39:47.273687  115982 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1210 00:39:47.273692  115982 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1210 00:39:47.273698  115982 command_runner.go:130] > # seccomp_profile = ""
	I1210 00:39:47.273704  115982 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1210 00:39:47.273711  115982 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1210 00:39:47.273717  115982 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1210 00:39:47.273723  115982 command_runner.go:130] > # which might increase security.
	I1210 00:39:47.273732  115982 command_runner.go:130] > # This option is currently deprecated,
	I1210 00:39:47.273740  115982 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1210 00:39:47.273745  115982 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1210 00:39:47.273753  115982 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1210 00:39:47.273758  115982 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1210 00:39:47.273767  115982 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1210 00:39:47.273772  115982 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1210 00:39:47.273780  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.273784  115982 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1210 00:39:47.273789  115982 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1210 00:39:47.273796  115982 command_runner.go:130] > # the cgroup blockio controller.
	I1210 00:39:47.273800  115982 command_runner.go:130] > # blockio_config_file = ""
	I1210 00:39:47.273806  115982 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1210 00:39:47.273812  115982 command_runner.go:130] > # blockio parameters.
	I1210 00:39:47.273816  115982 command_runner.go:130] > # blockio_reload = false
	I1210 00:39:47.273824  115982 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1210 00:39:47.273828  115982 command_runner.go:130] > # irqbalance daemon.
	I1210 00:39:47.273835  115982 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1210 00:39:47.273847  115982 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1210 00:39:47.273861  115982 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1210 00:39:47.273874  115982 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1210 00:39:47.273883  115982 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1210 00:39:47.273895  115982 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1210 00:39:47.273905  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.273914  115982 command_runner.go:130] > # rdt_config_file = ""
	I1210 00:39:47.273922  115982 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1210 00:39:47.273930  115982 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1210 00:39:47.273962  115982 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1210 00:39:47.273970  115982 command_runner.go:130] > # separate_pull_cgroup = ""
	I1210 00:39:47.273975  115982 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1210 00:39:47.273981  115982 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1210 00:39:47.273985  115982 command_runner.go:130] > # will be added.
	I1210 00:39:47.273989  115982 command_runner.go:130] > # default_capabilities = [
	I1210 00:39:47.274000  115982 command_runner.go:130] > # 	"CHOWN",
	I1210 00:39:47.274004  115982 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1210 00:39:47.274008  115982 command_runner.go:130] > # 	"FSETID",
	I1210 00:39:47.274011  115982 command_runner.go:130] > # 	"FOWNER",
	I1210 00:39:47.274014  115982 command_runner.go:130] > # 	"SETGID",
	I1210 00:39:47.274017  115982 command_runner.go:130] > # 	"SETUID",
	I1210 00:39:47.274020  115982 command_runner.go:130] > # 	"SETPCAP",
	I1210 00:39:47.274024  115982 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1210 00:39:47.274027  115982 command_runner.go:130] > # 	"KILL",
	I1210 00:39:47.274030  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274037  115982 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1210 00:39:47.274046  115982 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1210 00:39:47.274052  115982 command_runner.go:130] > # add_inheritable_capabilities = false
	I1210 00:39:47.274057  115982 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1210 00:39:47.274063  115982 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 00:39:47.274067  115982 command_runner.go:130] > default_sysctls = [
	I1210 00:39:47.274071  115982 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1210 00:39:47.274074  115982 command_runner.go:130] > ]
	I1210 00:39:47.274079  115982 command_runner.go:130] > # List of devices on the host that a
	I1210 00:39:47.274087  115982 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1210 00:39:47.274091  115982 command_runner.go:130] > # allowed_devices = [
	I1210 00:39:47.274094  115982 command_runner.go:130] > # 	"/dev/fuse",
	I1210 00:39:47.274097  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274102  115982 command_runner.go:130] > # List of additional devices, specified as
	I1210 00:39:47.274109  115982 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1210 00:39:47.274114  115982 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1210 00:39:47.274121  115982 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1210 00:39:47.274128  115982 command_runner.go:130] > # additional_devices = [
	I1210 00:39:47.274133  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274140  115982 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1210 00:39:47.274144  115982 command_runner.go:130] > # cdi_spec_dirs = [
	I1210 00:39:47.274147  115982 command_runner.go:130] > # 	"/etc/cdi",
	I1210 00:39:47.274151  115982 command_runner.go:130] > # 	"/var/run/cdi",
	I1210 00:39:47.274164  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274173  115982 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1210 00:39:47.274178  115982 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1210 00:39:47.274183  115982 command_runner.go:130] > # Defaults to false.
	I1210 00:39:47.274188  115982 command_runner.go:130] > # device_ownership_from_security_context = false
	I1210 00:39:47.274199  115982 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1210 00:39:47.274207  115982 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1210 00:39:47.274211  115982 command_runner.go:130] > # hooks_dir = [
	I1210 00:39:47.274218  115982 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1210 00:39:47.274221  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274227  115982 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1210 00:39:47.274235  115982 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1210 00:39:47.274240  115982 command_runner.go:130] > # its default mounts from the following two files:
	I1210 00:39:47.274246  115982 command_runner.go:130] > #
	I1210 00:39:47.274254  115982 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1210 00:39:47.274260  115982 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1210 00:39:47.274268  115982 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1210 00:39:47.274271  115982 command_runner.go:130] > #
	I1210 00:39:47.274276  115982 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1210 00:39:47.274285  115982 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1210 00:39:47.274291  115982 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1210 00:39:47.274295  115982 command_runner.go:130] > #      only add mounts it finds in this file.
	I1210 00:39:47.274301  115982 command_runner.go:130] > #
	I1210 00:39:47.274305  115982 command_runner.go:130] > # default_mounts_file = ""
	I1210 00:39:47.274310  115982 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1210 00:39:47.274319  115982 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1210 00:39:47.274322  115982 command_runner.go:130] > pids_limit = 1024
	I1210 00:39:47.274328  115982 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1210 00:39:47.274335  115982 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1210 00:39:47.274341  115982 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1210 00:39:47.274350  115982 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1210 00:39:47.274355  115982 command_runner.go:130] > # log_size_max = -1
	I1210 00:39:47.274362  115982 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1210 00:39:47.274376  115982 command_runner.go:130] > # log_to_journald = false
	I1210 00:39:47.274389  115982 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1210 00:39:47.274394  115982 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1210 00:39:47.274399  115982 command_runner.go:130] > # Path to directory for container attach sockets.
	I1210 00:39:47.274404  115982 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1210 00:39:47.274410  115982 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1210 00:39:47.274414  115982 command_runner.go:130] > # bind_mount_prefix = ""
	I1210 00:39:47.274420  115982 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1210 00:39:47.274426  115982 command_runner.go:130] > # read_only = false
	I1210 00:39:47.274431  115982 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1210 00:39:47.274440  115982 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1210 00:39:47.274443  115982 command_runner.go:130] > # live configuration reload.
	I1210 00:39:47.274447  115982 command_runner.go:130] > # log_level = "info"
	I1210 00:39:47.274452  115982 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1210 00:39:47.274459  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.274463  115982 command_runner.go:130] > # log_filter = ""
	I1210 00:39:47.274471  115982 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1210 00:39:47.274477  115982 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1210 00:39:47.274481  115982 command_runner.go:130] > # separated by comma.
	I1210 00:39:47.274488  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274494  115982 command_runner.go:130] > # uid_mappings = ""
	I1210 00:39:47.274500  115982 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1210 00:39:47.274505  115982 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1210 00:39:47.274512  115982 command_runner.go:130] > # separated by comma.
	I1210 00:39:47.274518  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274524  115982 command_runner.go:130] > # gid_mappings = ""
	I1210 00:39:47.274530  115982 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1210 00:39:47.274538  115982 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 00:39:47.274543  115982 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 00:39:47.274553  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274557  115982 command_runner.go:130] > # minimum_mappable_uid = -1
	I1210 00:39:47.274578  115982 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1210 00:39:47.274591  115982 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1210 00:39:47.274603  115982 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1210 00:39:47.274613  115982 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1210 00:39:47.274619  115982 command_runner.go:130] > # minimum_mappable_gid = -1
	I1210 00:39:47.274627  115982 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1210 00:39:47.274634  115982 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1210 00:39:47.274641  115982 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1210 00:39:47.274645  115982 command_runner.go:130] > # ctr_stop_timeout = 30
	I1210 00:39:47.274652  115982 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1210 00:39:47.274662  115982 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1210 00:39:47.274669  115982 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1210 00:39:47.274674  115982 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1210 00:39:47.274680  115982 command_runner.go:130] > drop_infra_ctr = false
	I1210 00:39:47.274686  115982 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1210 00:39:47.274694  115982 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1210 00:39:47.274700  115982 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1210 00:39:47.274706  115982 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1210 00:39:47.274712  115982 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1210 00:39:47.274720  115982 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1210 00:39:47.274725  115982 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1210 00:39:47.274733  115982 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1210 00:39:47.274737  115982 command_runner.go:130] > # shared_cpuset = ""
	I1210 00:39:47.274745  115982 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1210 00:39:47.274749  115982 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1210 00:39:47.274756  115982 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1210 00:39:47.274763  115982 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1210 00:39:47.274769  115982 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1210 00:39:47.274774  115982 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1210 00:39:47.274783  115982 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1210 00:39:47.274787  115982 command_runner.go:130] > # enable_criu_support = false
	I1210 00:39:47.274794  115982 command_runner.go:130] > # Enable/disable the generation of the container,
	I1210 00:39:47.274800  115982 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1210 00:39:47.274804  115982 command_runner.go:130] > # enable_pod_events = false
	I1210 00:39:47.274810  115982 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 00:39:47.274823  115982 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1210 00:39:47.274830  115982 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1210 00:39:47.274835  115982 command_runner.go:130] > # default_runtime = "runc"
	I1210 00:39:47.274846  115982 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1210 00:39:47.274866  115982 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1210 00:39:47.274882  115982 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1210 00:39:47.274897  115982 command_runner.go:130] > # creation as a file is not desired either.
	I1210 00:39:47.274911  115982 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1210 00:39:47.274919  115982 command_runner.go:130] > # the hostname is being managed dynamically.
	I1210 00:39:47.274923  115982 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1210 00:39:47.274937  115982 command_runner.go:130] > # ]
	I1210 00:39:47.274946  115982 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1210 00:39:47.274952  115982 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1210 00:39:47.274958  115982 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1210 00:39:47.274963  115982 command_runner.go:130] > # Each entry in the table should follow the format:
	I1210 00:39:47.274968  115982 command_runner.go:130] > #
	I1210 00:39:47.274972  115982 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1210 00:39:47.274977  115982 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1210 00:39:47.275025  115982 command_runner.go:130] > # runtime_type = "oci"
	I1210 00:39:47.275032  115982 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1210 00:39:47.275037  115982 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1210 00:39:47.275042  115982 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1210 00:39:47.275049  115982 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1210 00:39:47.275053  115982 command_runner.go:130] > # monitor_env = []
	I1210 00:39:47.275060  115982 command_runner.go:130] > # privileged_without_host_devices = false
	I1210 00:39:47.275063  115982 command_runner.go:130] > # allowed_annotations = []
	I1210 00:39:47.275068  115982 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1210 00:39:47.275073  115982 command_runner.go:130] > # Where:
	I1210 00:39:47.275078  115982 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1210 00:39:47.275086  115982 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1210 00:39:47.275092  115982 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1210 00:39:47.275098  115982 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1210 00:39:47.275101  115982 command_runner.go:130] > #   in $PATH.
	I1210 00:39:47.275122  115982 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1210 00:39:47.275131  115982 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1210 00:39:47.275138  115982 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1210 00:39:47.275143  115982 command_runner.go:130] > #   state.
	I1210 00:39:47.275149  115982 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1210 00:39:47.275157  115982 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1210 00:39:47.275163  115982 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1210 00:39:47.275170  115982 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1210 00:39:47.275177  115982 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1210 00:39:47.275185  115982 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1210 00:39:47.275191  115982 command_runner.go:130] > #   The currently recognized values are:
	I1210 00:39:47.275199  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1210 00:39:47.275206  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1210 00:39:47.275213  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1210 00:39:47.275219  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1210 00:39:47.275228  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1210 00:39:47.275234  115982 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1210 00:39:47.275242  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1210 00:39:47.275248  115982 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1210 00:39:47.275256  115982 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1210 00:39:47.275263  115982 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1210 00:39:47.275270  115982 command_runner.go:130] > #   deprecated option "conmon".
	I1210 00:39:47.275276  115982 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1210 00:39:47.275282  115982 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1210 00:39:47.275290  115982 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1210 00:39:47.275295  115982 command_runner.go:130] > #   should be moved to the container's cgroup
	I1210 00:39:47.275304  115982 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1210 00:39:47.275311  115982 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1210 00:39:47.275317  115982 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1210 00:39:47.275325  115982 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1210 00:39:47.275328  115982 command_runner.go:130] > #
	I1210 00:39:47.275332  115982 command_runner.go:130] > # Using the seccomp notifier feature:
	I1210 00:39:47.275335  115982 command_runner.go:130] > #
	I1210 00:39:47.275346  115982 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1210 00:39:47.275354  115982 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1210 00:39:47.275357  115982 command_runner.go:130] > #
	I1210 00:39:47.275363  115982 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1210 00:39:47.275369  115982 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1210 00:39:47.275372  115982 command_runner.go:130] > #
	I1210 00:39:47.275382  115982 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1210 00:39:47.275388  115982 command_runner.go:130] > # feature.
	I1210 00:39:47.275391  115982 command_runner.go:130] > #
	I1210 00:39:47.275397  115982 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1210 00:39:47.275404  115982 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1210 00:39:47.275411  115982 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1210 00:39:47.275419  115982 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1210 00:39:47.275425  115982 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1210 00:39:47.275428  115982 command_runner.go:130] > #
	I1210 00:39:47.275434  115982 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1210 00:39:47.275442  115982 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1210 00:39:47.275445  115982 command_runner.go:130] > #
	I1210 00:39:47.275451  115982 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1210 00:39:47.275459  115982 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1210 00:39:47.275462  115982 command_runner.go:130] > #
	I1210 00:39:47.275467  115982 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1210 00:39:47.275475  115982 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1210 00:39:47.275479  115982 command_runner.go:130] > # limitation.
	I1210 00:39:47.275485  115982 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1210 00:39:47.275491  115982 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1210 00:39:47.275495  115982 command_runner.go:130] > runtime_type = "oci"
	I1210 00:39:47.275500  115982 command_runner.go:130] > runtime_root = "/run/runc"
	I1210 00:39:47.275504  115982 command_runner.go:130] > runtime_config_path = ""
	I1210 00:39:47.275509  115982 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1210 00:39:47.275514  115982 command_runner.go:130] > monitor_cgroup = "pod"
	I1210 00:39:47.275518  115982 command_runner.go:130] > monitor_exec_cgroup = ""
	I1210 00:39:47.275524  115982 command_runner.go:130] > monitor_env = [
	I1210 00:39:47.275539  115982 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1210 00:39:47.275545  115982 command_runner.go:130] > ]
	I1210 00:39:47.275552  115982 command_runner.go:130] > privileged_without_host_devices = false
	I1210 00:39:47.275560  115982 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1210 00:39:47.275566  115982 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1210 00:39:47.275573  115982 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1210 00:39:47.275580  115982 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1210 00:39:47.275589  115982 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1210 00:39:47.275597  115982 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1210 00:39:47.275605  115982 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1210 00:39:47.275614  115982 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1210 00:39:47.275620  115982 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1210 00:39:47.275626  115982 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1210 00:39:47.275629  115982 command_runner.go:130] > # Example:
	I1210 00:39:47.275633  115982 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1210 00:39:47.275637  115982 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1210 00:39:47.275644  115982 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1210 00:39:47.275649  115982 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1210 00:39:47.275652  115982 command_runner.go:130] > # cpuset = 0
	I1210 00:39:47.275656  115982 command_runner.go:130] > # cpushares = "0-1"
	I1210 00:39:47.275659  115982 command_runner.go:130] > # Where:
	I1210 00:39:47.275663  115982 command_runner.go:130] > # The workload name is workload-type.
	I1210 00:39:47.275669  115982 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1210 00:39:47.275674  115982 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1210 00:39:47.275678  115982 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1210 00:39:47.275686  115982 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1210 00:39:47.275691  115982 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1210 00:39:47.275695  115982 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1210 00:39:47.275701  115982 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1210 00:39:47.275705  115982 command_runner.go:130] > # Default value is set to true
	I1210 00:39:47.275709  115982 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1210 00:39:47.275714  115982 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1210 00:39:47.275719  115982 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1210 00:39:47.275729  115982 command_runner.go:130] > # Default value is set to 'false'
	I1210 00:39:47.275733  115982 command_runner.go:130] > # disable_hostport_mapping = false
	I1210 00:39:47.275738  115982 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1210 00:39:47.275741  115982 command_runner.go:130] > #
	I1210 00:39:47.275746  115982 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1210 00:39:47.275751  115982 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1210 00:39:47.275757  115982 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1210 00:39:47.275762  115982 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1210 00:39:47.275767  115982 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1210 00:39:47.275770  115982 command_runner.go:130] > [crio.image]
	I1210 00:39:47.275775  115982 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1210 00:39:47.275779  115982 command_runner.go:130] > # default_transport = "docker://"
	I1210 00:39:47.275784  115982 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1210 00:39:47.275790  115982 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1210 00:39:47.275794  115982 command_runner.go:130] > # global_auth_file = ""
	I1210 00:39:47.275798  115982 command_runner.go:130] > # The image used to instantiate infra containers.
	I1210 00:39:47.275805  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.275809  115982 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1210 00:39:47.275815  115982 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1210 00:39:47.275820  115982 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1210 00:39:47.275824  115982 command_runner.go:130] > # This option supports live configuration reload.
	I1210 00:39:47.275831  115982 command_runner.go:130] > # pause_image_auth_file = ""
	I1210 00:39:47.275842  115982 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1210 00:39:47.275851  115982 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1210 00:39:47.275863  115982 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1210 00:39:47.275875  115982 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1210 00:39:47.275885  115982 command_runner.go:130] > # pause_command = "/pause"
	I1210 00:39:47.275894  115982 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1210 00:39:47.275906  115982 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1210 00:39:47.275918  115982 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1210 00:39:47.275928  115982 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1210 00:39:47.275936  115982 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1210 00:39:47.275942  115982 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1210 00:39:47.275955  115982 command_runner.go:130] > # pinned_images = [
	I1210 00:39:47.275960  115982 command_runner.go:130] > # ]
	I1210 00:39:47.275966  115982 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1210 00:39:47.275973  115982 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1210 00:39:47.275978  115982 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1210 00:39:47.275986  115982 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1210 00:39:47.275991  115982 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1210 00:39:47.275997  115982 command_runner.go:130] > # signature_policy = ""
	I1210 00:39:47.276002  115982 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1210 00:39:47.276009  115982 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1210 00:39:47.276016  115982 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1210 00:39:47.276022  115982 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1210 00:39:47.276027  115982 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1210 00:39:47.276034  115982 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1210 00:39:47.276040  115982 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1210 00:39:47.276048  115982 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1210 00:39:47.276052  115982 command_runner.go:130] > # changing them here.
	I1210 00:39:47.276056  115982 command_runner.go:130] > # insecure_registries = [
	I1210 00:39:47.276059  115982 command_runner.go:130] > # ]
	I1210 00:39:47.276065  115982 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1210 00:39:47.276070  115982 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1210 00:39:47.276075  115982 command_runner.go:130] > # image_volumes = "mkdir"
	I1210 00:39:47.276081  115982 command_runner.go:130] > # Temporary directory to use for storing big files
	I1210 00:39:47.276085  115982 command_runner.go:130] > # big_files_temporary_dir = ""
	I1210 00:39:47.276093  115982 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1210 00:39:47.276099  115982 command_runner.go:130] > # CNI plugins.
	I1210 00:39:47.276102  115982 command_runner.go:130] > [crio.network]
	I1210 00:39:47.276108  115982 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1210 00:39:47.276113  115982 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1210 00:39:47.276118  115982 command_runner.go:130] > # cni_default_network = ""
	I1210 00:39:47.276124  115982 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1210 00:39:47.276132  115982 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1210 00:39:47.276139  115982 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1210 00:39:47.276147  115982 command_runner.go:130] > # plugin_dirs = [
	I1210 00:39:47.276154  115982 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1210 00:39:47.276157  115982 command_runner.go:130] > # ]
	I1210 00:39:47.276165  115982 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1210 00:39:47.276172  115982 command_runner.go:130] > [crio.metrics]
	I1210 00:39:47.276176  115982 command_runner.go:130] > # Globally enable or disable metrics support.
	I1210 00:39:47.276180  115982 command_runner.go:130] > enable_metrics = true
	I1210 00:39:47.276184  115982 command_runner.go:130] > # Specify enabled metrics collectors.
	I1210 00:39:47.276191  115982 command_runner.go:130] > # Per default all metrics are enabled.
	I1210 00:39:47.276197  115982 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1210 00:39:47.276205  115982 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1210 00:39:47.276210  115982 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1210 00:39:47.276216  115982 command_runner.go:130] > # metrics_collectors = [
	I1210 00:39:47.276220  115982 command_runner.go:130] > # 	"operations",
	I1210 00:39:47.276224  115982 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1210 00:39:47.276229  115982 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1210 00:39:47.276233  115982 command_runner.go:130] > # 	"operations_errors",
	I1210 00:39:47.276238  115982 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1210 00:39:47.276243  115982 command_runner.go:130] > # 	"image_pulls_by_name",
	I1210 00:39:47.276248  115982 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1210 00:39:47.276254  115982 command_runner.go:130] > # 	"image_pulls_failures",
	I1210 00:39:47.276258  115982 command_runner.go:130] > # 	"image_pulls_successes",
	I1210 00:39:47.276262  115982 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1210 00:39:47.276268  115982 command_runner.go:130] > # 	"image_layer_reuse",
	I1210 00:39:47.276273  115982 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1210 00:39:47.276283  115982 command_runner.go:130] > # 	"containers_oom_total",
	I1210 00:39:47.276287  115982 command_runner.go:130] > # 	"containers_oom",
	I1210 00:39:47.276291  115982 command_runner.go:130] > # 	"processes_defunct",
	I1210 00:39:47.276295  115982 command_runner.go:130] > # 	"operations_total",
	I1210 00:39:47.276303  115982 command_runner.go:130] > # 	"operations_latency_seconds",
	I1210 00:39:47.276310  115982 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1210 00:39:47.276314  115982 command_runner.go:130] > # 	"operations_errors_total",
	I1210 00:39:47.276320  115982 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1210 00:39:47.276329  115982 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1210 00:39:47.276336  115982 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1210 00:39:47.276340  115982 command_runner.go:130] > # 	"image_pulls_success_total",
	I1210 00:39:47.276348  115982 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1210 00:39:47.276352  115982 command_runner.go:130] > # 	"containers_oom_count_total",
	I1210 00:39:47.276359  115982 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1210 00:39:47.276363  115982 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1210 00:39:47.276366  115982 command_runner.go:130] > # ]
	I1210 00:39:47.276371  115982 command_runner.go:130] > # The port on which the metrics server will listen.
	I1210 00:39:47.276377  115982 command_runner.go:130] > # metrics_port = 9090
	I1210 00:39:47.276385  115982 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1210 00:39:47.276391  115982 command_runner.go:130] > # metrics_socket = ""
	I1210 00:39:47.276399  115982 command_runner.go:130] > # The certificate for the secure metrics server.
	I1210 00:39:47.276407  115982 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1210 00:39:47.276413  115982 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1210 00:39:47.276420  115982 command_runner.go:130] > # certificate on any modification event.
	I1210 00:39:47.276424  115982 command_runner.go:130] > # metrics_cert = ""
	I1210 00:39:47.276432  115982 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1210 00:39:47.276436  115982 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1210 00:39:47.276443  115982 command_runner.go:130] > # metrics_key = ""
	I1210 00:39:47.276448  115982 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1210 00:39:47.276451  115982 command_runner.go:130] > [crio.tracing]
	I1210 00:39:47.276457  115982 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1210 00:39:47.276462  115982 command_runner.go:130] > # enable_tracing = false
	I1210 00:39:47.276467  115982 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1210 00:39:47.276474  115982 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1210 00:39:47.276481  115982 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1210 00:39:47.276488  115982 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1210 00:39:47.276492  115982 command_runner.go:130] > # CRI-O NRI configuration.
	I1210 00:39:47.276496  115982 command_runner.go:130] > [crio.nri]
	I1210 00:39:47.276500  115982 command_runner.go:130] > # Globally enable or disable NRI.
	I1210 00:39:47.276512  115982 command_runner.go:130] > # enable_nri = false
	I1210 00:39:47.276519  115982 command_runner.go:130] > # NRI socket to listen on.
	I1210 00:39:47.276528  115982 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1210 00:39:47.276534  115982 command_runner.go:130] > # NRI plugin directory to use.
	I1210 00:39:47.276539  115982 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1210 00:39:47.276543  115982 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1210 00:39:47.276550  115982 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1210 00:39:47.276555  115982 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1210 00:39:47.276561  115982 command_runner.go:130] > # nri_disable_connections = false
	I1210 00:39:47.276565  115982 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1210 00:39:47.276569  115982 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1210 00:39:47.276576  115982 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1210 00:39:47.276581  115982 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1210 00:39:47.276586  115982 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1210 00:39:47.276590  115982 command_runner.go:130] > [crio.stats]
	I1210 00:39:47.276597  115982 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1210 00:39:47.276605  115982 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1210 00:39:47.276609  115982 command_runner.go:130] > # stats_collection_period = 0
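The lines above are CRI-O echoing back the full configuration minikube provisioned on this node; note the non-default values such as cgroup_manager = "cgroupfs", pids_limit = 1024 and pause_image = "registry.k8s.io/pause:3.10". For anyone reproducing this by hand, a minimal sketch of overriding one of these values with a drop-in file follows; the file name and the chosen value are illustrative, not something this test writes.

# Illustrative only: override a single CRI-O option via a drop-in file
# instead of editing the main crio.conf dumped above.
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-example-override.conf
[crio.runtime]
pids_limit = 2048
EOF
# Most options take effect only after a restart; a few (e.g. log_level)
# support live configuration reload as noted in the comments above.
sudo systemctl restart crio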
	I1210 00:39:47.276694  115982 cni.go:84] Creating CNI manager for ""
	I1210 00:39:47.276705  115982 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1210 00:39:47.276714  115982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:39:47.276740  115982 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-029725 NodeName:multinode-029725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:39:47.276884  115982 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-029725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:39:47.276962  115982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:39:47.287127  115982 command_runner.go:130] > kubeadm
	I1210 00:39:47.287148  115982 command_runner.go:130] > kubectl
	I1210 00:39:47.287154  115982 command_runner.go:130] > kubelet
	I1210 00:39:47.287183  115982 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:39:47.287244  115982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:39:47.296503  115982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1210 00:39:47.311915  115982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:39:47.327316  115982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1210 00:39:47.342248  115982 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1210 00:39:47.345683  115982 command_runner.go:130] > 192.168.39.24	control-plane.minikube.internal
	I1210 00:39:47.345746  115982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:39:47.483796  115982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:39:47.498651  115982 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725 for IP: 192.168.39.24
	I1210 00:39:47.498677  115982 certs.go:194] generating shared ca certs ...
	I1210 00:39:47.498698  115982 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:39:47.498883  115982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:39:47.498951  115982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:39:47.498966  115982 certs.go:256] generating profile certs ...
	I1210 00:39:47.499091  115982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/client.key
	I1210 00:39:47.499180  115982 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key.e615d136
	I1210 00:39:47.499236  115982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key
	I1210 00:39:47.499250  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1210 00:39:47.499266  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1210 00:39:47.499283  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1210 00:39:47.499312  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1210 00:39:47.499338  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1210 00:39:47.499355  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1210 00:39:47.499373  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1210 00:39:47.499398  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1210 00:39:47.499457  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:39:47.499501  115982 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:39:47.499515  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:39:47.499545  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:39:47.499576  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:39:47.499605  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:39:47.500209  115982 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:39:47.500291  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem -> /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.500321  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.500339  115982 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.501979  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:39:47.524820  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:39:47.546799  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:39:47.567917  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:39:47.589155  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 00:39:47.609850  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:39:47.632246  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:39:47.654556  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/multinode-029725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:39:47.676321  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:39:47.697872  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:39:47.719353  115982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:39:47.740582  115982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:39:47.755240  115982 ssh_runner.go:195] Run: openssl version
	I1210 00:39:47.760463  115982 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1210 00:39:47.760545  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:39:47.769963  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773838  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773871  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.773908  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:39:47.778744  115982 command_runner.go:130] > 3ec20f2e
	I1210 00:39:47.778941  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:39:47.787614  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:39:47.797280  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801201  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801263  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.801305  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:39:47.806253  115982 command_runner.go:130] > b5213941
	I1210 00:39:47.806316  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:39:47.815330  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:39:47.825508  115982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829552  115982 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829632  115982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.829673  115982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:39:47.834706  115982 command_runner.go:130] > 51391683
	I1210 00:39:47.834841  115982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:39:47.844143  115982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:39:47.848228  115982 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:39:47.848251  115982 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1210 00:39:47.848260  115982 command_runner.go:130] > Device: 253,1	Inode: 4197422     Links: 1
	I1210 00:39:47.848270  115982 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1210 00:39:47.848279  115982 command_runner.go:130] > Access: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848286  115982 command_runner.go:130] > Modify: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848295  115982 command_runner.go:130] > Change: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848307  115982 command_runner.go:130] >  Birth: 2024-12-10 00:33:05.105104742 +0000
	I1210 00:39:47.848359  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:39:47.853217  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.853358  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:39:47.858468  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.858700  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:39:47.863748  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.863796  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:39:47.868732  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.868885  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:39:47.874318  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.874359  115982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:39:47.879647  115982 command_runner.go:130] > Certificate will not expire
	I1210 00:39:47.879712  115982 kubeadm.go:392] StartCluster: {Name:multinode-029725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-029725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.171 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.5 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:39:47.879860  115982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:39:47.879921  115982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:39:47.913525  115982 command_runner.go:130] > ce548c1af69485ec97d783ef4bc8378553e7aebf25d71fd09e05ffa7af9717c2
	I1210 00:39:47.913549  115982 command_runner.go:130] > dca777e391b607cabcfc13faaf91e40f93367799c1170c18ede23cbf9b41744d
	I1210 00:39:47.913558  115982 command_runner.go:130] > 790a0091a09b4bcd3316230c192d0e740fb9f0154fc465a21c0fa9a3447ceed6
	I1210 00:39:47.913569  115982 command_runner.go:130] > 19f7ffc0fde3e65fb91a26d70a77ae44d898832e2aca60c36c529bc0b3e4e25c
	I1210 00:39:47.913639  115982 command_runner.go:130] > 304150d1330c5715e865e384bc6a2b004fe37ec1ece13812de4bd2d41ce9beeb
	I1210 00:39:47.913668  115982 command_runner.go:130] > fe3b27671e381d98f592554b1dc47b6ae16393c97ca933850b221d9de963a187
	I1210 00:39:47.913680  115982 command_runner.go:130] > d9f66ffa76d335747f03a3eebab3b6bec74775761d3bfe63d475bb68a6487a48
	I1210 00:39:47.913761  115982 command_runner.go:130] > d33cbe88741979478ea3e99fbcc0c59bb3eabafa2402a6fa3748cef7f2ce4695
	I1210 00:39:47.915063  115982 cri.go:89] found id: "ce548c1af69485ec97d783ef4bc8378553e7aebf25d71fd09e05ffa7af9717c2"
	I1210 00:39:47.915076  115982 cri.go:89] found id: "dca777e391b607cabcfc13faaf91e40f93367799c1170c18ede23cbf9b41744d"
	I1210 00:39:47.915079  115982 cri.go:89] found id: "790a0091a09b4bcd3316230c192d0e740fb9f0154fc465a21c0fa9a3447ceed6"
	I1210 00:39:47.915083  115982 cri.go:89] found id: "19f7ffc0fde3e65fb91a26d70a77ae44d898832e2aca60c36c529bc0b3e4e25c"
	I1210 00:39:47.915087  115982 cri.go:89] found id: "304150d1330c5715e865e384bc6a2b004fe37ec1ece13812de4bd2d41ce9beeb"
	I1210 00:39:47.915092  115982 cri.go:89] found id: "fe3b27671e381d98f592554b1dc47b6ae16393c97ca933850b221d9de963a187"
	I1210 00:39:47.915096  115982 cri.go:89] found id: "d9f66ffa76d335747f03a3eebab3b6bec74775761d3bfe63d475bb68a6487a48"
	I1210 00:39:47.915100  115982 cri.go:89] found id: "d33cbe88741979478ea3e99fbcc0c59bb3eabafa2402a6fa3748cef7f2ce4695"
	I1210 00:39:47.915104  115982 cri.go:89] found id: ""
	I1210 00:39:47.915158  115982 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-029725 -n multinode-029725
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-029725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.92s)

                                                
                                    
x
+
TestPreload (161.96s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-961155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-961155 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.780793901s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961155 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-961155 image pull gcr.io/k8s-minikube/busybox: (2.399623177s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-961155
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-961155: (7.288040729s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-961155 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1210 00:50:09.289039   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-961155 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (56.526563463s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961155 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-10 00:50:25.445652233 +0000 UTC m=+4018.525290385
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-961155 -n test-preload-961155
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-961155 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-961155 logs -n 25: (1.026622083s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725 sudo cat                                       | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt                       | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m02:/home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n                                                                 | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | multinode-029725-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-029725 ssh -n multinode-029725-m02 sudo cat                                   | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	|         | /home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-029725 node stop m03                                                          | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:35 UTC |
	| node    | multinode-029725 node start                                                             | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:35 UTC | 10 Dec 24 00:36 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| stop    | -p multinode-029725                                                                     | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:36 UTC |                     |
	| start   | -p multinode-029725                                                                     | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC |                     |
	| node    | multinode-029725 node delete                                                            | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC | 10 Dec 24 00:41 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-029725 stop                                                                   | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:41 UTC |                     |
	| start   | -p multinode-029725                                                                     | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:44 UTC | 10 Dec 24 00:47 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-029725                                                                | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC |                     |
	| start   | -p multinode-029725-m02                                                                 | multinode-029725-m02 | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-029725-m03                                                                 | multinode-029725-m03 | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC | 10 Dec 24 00:47 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-029725                                                                 | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC |                     |
	| delete  | -p multinode-029725-m03                                                                 | multinode-029725-m03 | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC | 10 Dec 24 00:47 UTC |
	| delete  | -p multinode-029725                                                                     | multinode-029725     | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC | 10 Dec 24 00:47 UTC |
	| start   | -p test-preload-961155                                                                  | test-preload-961155  | jenkins | v1.34.0 | 10 Dec 24 00:47 UTC | 10 Dec 24 00:49 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-961155 image pull                                                          | test-preload-961155  | jenkins | v1.34.0 | 10 Dec 24 00:49 UTC | 10 Dec 24 00:49 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-961155                                                                  | test-preload-961155  | jenkins | v1.34.0 | 10 Dec 24 00:49 UTC | 10 Dec 24 00:49 UTC |
	| start   | -p test-preload-961155                                                                  | test-preload-961155  | jenkins | v1.34.0 | 10 Dec 24 00:49 UTC | 10 Dec 24 00:50 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-961155 image list                                                          | test-preload-961155  | jenkins | v1.34.0 | 10 Dec 24 00:50 UTC | 10 Dec 24 00:50 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:49:28
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:49:28.722543  120345 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:49:28.722665  120345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:49:28.722676  120345 out.go:358] Setting ErrFile to fd 2...
	I1210 00:49:28.722680  120345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:49:28.722851  120345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:49:28.723394  120345 out.go:352] Setting JSON to false
	I1210 00:49:28.724277  120345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9120,"bootTime":1733782649,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:49:28.724372  120345 start.go:139] virtualization: kvm guest
	I1210 00:49:28.726278  120345 out.go:177] * [test-preload-961155] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:49:28.727444  120345 notify.go:220] Checking for updates...
	I1210 00:49:28.727449  120345 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:49:28.728580  120345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:49:28.729775  120345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:49:28.730794  120345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:49:28.731778  120345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:49:28.732826  120345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:49:28.734179  120345 config.go:182] Loaded profile config "test-preload-961155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1210 00:49:28.734544  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:49:28.734634  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:49:28.749228  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I1210 00:49:28.749689  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:49:28.750185  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:49:28.750227  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:49:28.750608  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:49:28.750783  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:28.752104  120345 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1210 00:49:28.753139  120345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:49:28.753433  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:49:28.753476  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:49:28.767515  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I1210 00:49:28.767927  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:49:28.768325  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:49:28.768347  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:49:28.768642  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:49:28.768804  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:28.800596  120345 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:49:28.801792  120345 start.go:297] selected driver: kvm2
	I1210 00:49:28.801803  120345 start.go:901] validating driver "kvm2" against &{Name:test-preload-961155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-961155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:49:28.801911  120345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:49:28.802648  120345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:49:28.802750  120345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:49:28.817327  120345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:49:28.817705  120345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:49:28.817734  120345 cni.go:84] Creating CNI manager for ""
	I1210 00:49:28.817787  120345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:49:28.817853  120345 start.go:340] cluster config:
	{Name:test-preload-961155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-961155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:49:28.817957  120345 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:49:28.820365  120345 out.go:177] * Starting "test-preload-961155" primary control-plane node in "test-preload-961155" cluster
	I1210 00:49:28.821337  120345 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1210 00:49:28.847329  120345 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1210 00:49:28.847347  120345 cache.go:56] Caching tarball of preloaded images
	I1210 00:49:28.847477  120345 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1210 00:49:28.848907  120345 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1210 00:49:28.850010  120345 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1210 00:49:28.878040  120345 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1210 00:49:32.966852  120345 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1210 00:49:32.966956  120345 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1210 00:49:33.825210  120345 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1210 00:49:33.825352  120345 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/config.json ...
	I1210 00:49:33.825619  120345 start.go:360] acquireMachinesLock for test-preload-961155: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:49:33.825703  120345 start.go:364] duration metric: took 55.021µs to acquireMachinesLock for "test-preload-961155"
	I1210 00:49:33.825723  120345 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:49:33.825729  120345 fix.go:54] fixHost starting: 
	I1210 00:49:33.826026  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:49:33.826074  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:49:33.840571  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I1210 00:49:33.841059  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:49:33.841566  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:49:33.841587  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:49:33.841910  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:49:33.842091  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:33.842233  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetState
	I1210 00:49:33.843856  120345 fix.go:112] recreateIfNeeded on test-preload-961155: state=Stopped err=<nil>
	I1210 00:49:33.843877  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	W1210 00:49:33.844033  120345 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:49:33.845810  120345 out.go:177] * Restarting existing kvm2 VM for "test-preload-961155" ...
	I1210 00:49:33.846942  120345 main.go:141] libmachine: (test-preload-961155) Calling .Start
	I1210 00:49:33.847089  120345 main.go:141] libmachine: (test-preload-961155) Ensuring networks are active...
	I1210 00:49:33.847716  120345 main.go:141] libmachine: (test-preload-961155) Ensuring network default is active
	I1210 00:49:33.847977  120345 main.go:141] libmachine: (test-preload-961155) Ensuring network mk-test-preload-961155 is active
	I1210 00:49:33.848324  120345 main.go:141] libmachine: (test-preload-961155) Getting domain xml...
	I1210 00:49:33.848981  120345 main.go:141] libmachine: (test-preload-961155) Creating domain...
	I1210 00:49:35.024435  120345 main.go:141] libmachine: (test-preload-961155) Waiting to get IP...
	I1210 00:49:35.025474  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:35.025775  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:35.025855  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:35.025779  120396 retry.go:31] will retry after 191.338591ms: waiting for machine to come up
	I1210 00:49:35.219360  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:35.219849  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:35.219876  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:35.219785  120396 retry.go:31] will retry after 373.424981ms: waiting for machine to come up
	I1210 00:49:35.594373  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:35.594830  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:35.594852  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:35.594804  120396 retry.go:31] will retry after 314.775332ms: waiting for machine to come up
	I1210 00:49:35.911296  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:35.911700  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:35.911722  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:35.911649  120396 retry.go:31] will retry after 495.588422ms: waiting for machine to come up
	I1210 00:49:36.408462  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:36.408872  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:36.408892  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:36.408852  120396 retry.go:31] will retry after 645.982225ms: waiting for machine to come up
	I1210 00:49:37.056587  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:37.056929  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:37.056958  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:37.056870  120396 retry.go:31] will retry after 687.398733ms: waiting for machine to come up
	I1210 00:49:37.745688  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:37.746096  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:37.746123  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:37.746048  120396 retry.go:31] will retry after 1.05488589s: waiting for machine to come up
	I1210 00:49:38.802014  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:38.802397  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:38.802433  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:38.802356  120396 retry.go:31] will retry after 961.193659ms: waiting for machine to come up
	I1210 00:49:39.764726  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:39.765132  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:39.765155  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:39.765083  120396 retry.go:31] will retry after 1.756681062s: waiting for machine to come up
	I1210 00:49:41.523041  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:41.523422  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:41.523453  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:41.523377  120396 retry.go:31] will retry after 1.518831221s: waiting for machine to come up
	I1210 00:49:43.044268  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:43.044764  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:43.044798  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:43.044686  120396 retry.go:31] will retry after 1.794469283s: waiting for machine to come up
	I1210 00:49:44.841150  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:44.841633  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:44.841662  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:44.841607  120396 retry.go:31] will retry after 2.435157745s: waiting for machine to come up
	I1210 00:49:47.277911  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:47.278307  120345 main.go:141] libmachine: (test-preload-961155) DBG | unable to find current IP address of domain test-preload-961155 in network mk-test-preload-961155
	I1210 00:49:47.278333  120345 main.go:141] libmachine: (test-preload-961155) DBG | I1210 00:49:47.278255  120396 retry.go:31] will retry after 3.74559165s: waiting for machine to come up
	I1210 00:49:51.026783  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.027292  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has current primary IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.027315  120345 main.go:141] libmachine: (test-preload-961155) Found IP for machine: 192.168.39.111
	I1210 00:49:51.027324  120345 main.go:141] libmachine: (test-preload-961155) Reserving static IP address...
	I1210 00:49:51.027785  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "test-preload-961155", mac: "52:54:00:8b:6b:1b", ip: "192.168.39.111"} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.027817  120345 main.go:141] libmachine: (test-preload-961155) DBG | skip adding static IP to network mk-test-preload-961155 - found existing host DHCP lease matching {name: "test-preload-961155", mac: "52:54:00:8b:6b:1b", ip: "192.168.39.111"}
	I1210 00:49:51.027831  120345 main.go:141] libmachine: (test-preload-961155) Reserved static IP address: 192.168.39.111
	I1210 00:49:51.027844  120345 main.go:141] libmachine: (test-preload-961155) Waiting for SSH to be available...
	I1210 00:49:51.027857  120345 main.go:141] libmachine: (test-preload-961155) DBG | Getting to WaitForSSH function...
	I1210 00:49:51.030136  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.030489  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.030514  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.030645  120345 main.go:141] libmachine: (test-preload-961155) DBG | Using SSH client type: external
	I1210 00:49:51.030729  120345 main.go:141] libmachine: (test-preload-961155) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa (-rw-------)
	I1210 00:49:51.030765  120345 main.go:141] libmachine: (test-preload-961155) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:49:51.030795  120345 main.go:141] libmachine: (test-preload-961155) DBG | About to run SSH command:
	I1210 00:49:51.030807  120345 main.go:141] libmachine: (test-preload-961155) DBG | exit 0
	I1210 00:49:51.150059  120345 main.go:141] libmachine: (test-preload-961155) DBG | SSH cmd err, output: <nil>: 
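
Note: the WaitForSSH probe logged above is simply the external ssh client invoked with the options from the debug line and the command "exit 0". A manual equivalent, using the same key path and guest IP as this run, would be:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa \
        -p 22 docker@192.168.39.111 'exit 0'
    # an exit status of 0 (the empty "SSH cmd err, output" above) means sshd in the guest is accepting connections
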
	I1210 00:49:51.150402  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetConfigRaw
	I1210 00:49:51.151034  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetIP
	I1210 00:49:51.153373  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.153664  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.153708  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.153930  120345 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/config.json ...
	I1210 00:49:51.154117  120345 machine.go:93] provisionDockerMachine start ...
	I1210 00:49:51.154136  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:51.154329  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.156446  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.156735  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.156762  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.156881  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:51.157040  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.157200  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.157335  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:51.157512  120345 main.go:141] libmachine: Using SSH client type: native
	I1210 00:49:51.157692  120345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1210 00:49:51.157704  120345 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:49:51.250288  120345 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 00:49:51.250317  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetMachineName
	I1210 00:49:51.250548  120345 buildroot.go:166] provisioning hostname "test-preload-961155"
	I1210 00:49:51.250595  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetMachineName
	I1210 00:49:51.250819  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.253663  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.254069  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.254100  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.254227  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:51.254388  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.254547  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.254672  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:51.254872  120345 main.go:141] libmachine: Using SSH client type: native
	I1210 00:49:51.255045  120345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1210 00:49:51.255058  120345 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-961155 && echo "test-preload-961155" | sudo tee /etc/hostname
	I1210 00:49:51.365943  120345 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-961155
	
	I1210 00:49:51.365980  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.368655  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.369036  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.369066  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.369270  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:51.369482  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.369627  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.369759  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:51.369971  120345 main.go:141] libmachine: Using SSH client type: native
	I1210 00:49:51.370141  120345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1210 00:49:51.370157  120345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-961155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-961155/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-961155' | sudo tee -a /etc/hosts; 
				fi
			fi
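
Note: the script above ensures /etc/hosts maps 127.0.1.1 to the machine name, rewriting an existing 127.0.1.1 entry if one is present and appending one otherwise. After it runs, the guest's /etc/hosts should contain a line like the following (any other entries are whatever the image already shipped):

    127.0.1.1 test-preload-961155
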
	I1210 00:49:51.474663  120345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:49:51.474691  120345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:49:51.474733  120345 buildroot.go:174] setting up certificates
	I1210 00:49:51.474743  120345 provision.go:84] configureAuth start
	I1210 00:49:51.474754  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetMachineName
	I1210 00:49:51.475034  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetIP
	I1210 00:49:51.477596  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.477948  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.477985  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.478157  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.480522  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.480884  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.480913  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.481057  120345 provision.go:143] copyHostCerts
	I1210 00:49:51.481110  120345 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:49:51.481130  120345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:49:51.481194  120345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:49:51.481282  120345 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:49:51.481290  120345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:49:51.481313  120345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:49:51.481366  120345 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:49:51.481373  120345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:49:51.481393  120345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:49:51.481441  120345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.test-preload-961155 san=[127.0.0.1 192.168.39.111 localhost minikube test-preload-961155]
	I1210 00:49:51.747738  120345 provision.go:177] copyRemoteCerts
	I1210 00:49:51.747794  120345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:49:51.747824  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.750394  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.750720  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.750750  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.750890  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:51.751085  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.751200  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:51.751304  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:49:51.828224  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:49:51.850045  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 00:49:51.870691  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
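
Note: if the copied certificates ever need to be inspected by hand, a standard openssl check on the guest works; this is only a verification sketch, not something the test itself runs:

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # the SANs should match the san list logged by provision.go above:
    # 127.0.0.1, 192.168.39.111, localhost, minikube, test-preload-961155
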
	I1210 00:49:51.891027  120345 provision.go:87] duration metric: took 416.27271ms to configureAuth
	I1210 00:49:51.891052  120345 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:49:51.891199  120345 config.go:182] Loaded profile config "test-preload-961155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1210 00:49:51.891277  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:51.893825  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.894203  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:51.894243  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:51.894414  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:51.894599  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.894731  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:51.894898  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:51.895055  120345 main.go:141] libmachine: Using SSH client type: native
	I1210 00:49:51.895261  120345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1210 00:49:51.895276  120345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:49:52.096707  120345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:49:52.096737  120345 machine.go:96] duration metric: took 942.606202ms to provisionDockerMachine
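
Note: the tee command above leaves a one-line environment file on the guest before crio is restarted. Its expected contents, reconstructed from the command rather than captured verbatim by the log:

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
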
	I1210 00:49:52.096753  120345 start.go:293] postStartSetup for "test-preload-961155" (driver="kvm2")
	I1210 00:49:52.096764  120345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:49:52.096781  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:52.097100  120345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:49:52.097149  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:52.099718  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.100071  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:52.100099  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.100221  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:52.100410  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:52.100536  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:52.100640  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:49:52.176918  120345 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:49:52.181061  120345 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:49:52.181084  120345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:49:52.181149  120345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:49:52.181240  120345 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:49:52.181363  120345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:49:52.190239  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:49:52.211740  120345 start.go:296] duration metric: took 114.975621ms for postStartSetup
	I1210 00:49:52.211768  120345 fix.go:56] duration metric: took 18.386040211s for fixHost
	I1210 00:49:52.211788  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:52.214329  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.214689  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:52.214719  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.214856  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:52.215046  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:52.215205  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:52.215327  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:52.215471  120345 main.go:141] libmachine: Using SSH client type: native
	I1210 00:49:52.215651  120345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.111 22 <nil> <nil>}
	I1210 00:49:52.215665  120345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:49:52.310660  120345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791792.270186593
	
	I1210 00:49:52.310696  120345 fix.go:216] guest clock: 1733791792.270186593
	I1210 00:49:52.310704  120345 fix.go:229] Guest: 2024-12-10 00:49:52.270186593 +0000 UTC Remote: 2024-12-10 00:49:52.211772903 +0000 UTC m=+23.526147980 (delta=58.41369ms)
	I1210 00:49:52.310722  120345 fix.go:200] guest clock delta is within tolerance: 58.41369ms
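
Note: the skew check subtracts the host-side timestamp recorded just before the SSH round trip from the guest's "date +%s.%N" output:

    guest: 1733791792.270186593
    host:  1733791792.211772903
    delta: 0.058413690 s ≈ 58.41369ms, within minikube's clock tolerance
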
	I1210 00:49:52.310729  120345 start.go:83] releasing machines lock for "test-preload-961155", held for 18.485011319s
	I1210 00:49:52.310748  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:52.311019  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetIP
	I1210 00:49:52.313767  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.314085  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:52.314114  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.314275  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:52.314775  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:52.314979  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:49:52.315077  120345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:49:52.315132  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:52.315177  120345 ssh_runner.go:195] Run: cat /version.json
	I1210 00:49:52.315199  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:49:52.317731  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.318109  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:52.318136  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.318156  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.318266  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:52.318419  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:52.318524  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:52.318575  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:52.318581  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:52.318745  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:49:52.318763  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:49:52.318993  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:49:52.319163  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:49:52.319336  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:49:52.411196  120345 ssh_runner.go:195] Run: systemctl --version
	I1210 00:49:52.416601  120345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:49:52.557780  120345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:49:52.563385  120345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:49:52.563451  120345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:49:52.578387  120345 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:49:52.578412  120345 start.go:495] detecting cgroup driver to use...
	I1210 00:49:52.578471  120345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:49:52.592496  120345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:49:52.604974  120345 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:49:52.605015  120345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:49:52.616530  120345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:49:52.628803  120345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:49:52.731344  120345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:49:52.852560  120345 docker.go:233] disabling docker service ...
	I1210 00:49:52.852654  120345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:49:52.865674  120345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:49:52.877316  120345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:49:53.002396  120345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:49:53.116931  120345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:49:53.129899  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:49:53.146299  120345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1210 00:49:53.146362  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.155498  120345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:49:53.155549  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.164632  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.173586  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.182420  120345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:49:53.191499  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.200319  120345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:49:53.215244  120345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
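
Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values (a sketch; the TOML section headers and any other keys are whatever the image already ships, only these settings are touched by the commands):

    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    pause_image = "registry.k8s.io/pause:3.7"
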
	I1210 00:49:53.224072  120345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:49:53.232092  120345 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:49:53.232152  120345 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:49:53.243722  120345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:49:53.252164  120345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:49:53.357446  120345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:49:53.443933  120345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:49:53.444010  120345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:49:53.448331  120345 start.go:563] Will wait 60s for crictl version
	I1210 00:49:53.448391  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:53.451806  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:49:53.489783  120345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:49:53.489866  120345 ssh_runner.go:195] Run: crio --version
	I1210 00:49:53.514990  120345 ssh_runner.go:195] Run: crio --version
	I1210 00:49:53.542858  120345 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1210 00:49:53.544281  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetIP
	I1210 00:49:53.547452  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:53.547773  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:49:53.547793  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:49:53.548010  120345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 00:49:53.551717  120345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:49:53.563137  120345 kubeadm.go:883] updating cluster {Name:test-preload-961155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-961155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:49:53.563279  120345 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1210 00:49:53.563342  120345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:49:53.594871  120345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1210 00:49:53.594940  120345 ssh_runner.go:195] Run: which lz4
	I1210 00:49:53.598385  120345 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:49:53.602150  120345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:49:53.602176  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1210 00:49:54.891973  120345 crio.go:462] duration metric: took 1.293613571s to copy over tarball
	I1210 00:49:54.892049  120345 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:49:57.177320  120345 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.285234675s)
	I1210 00:49:57.177361  120345 crio.go:469] duration metric: took 2.285356013s to extract the tarball
	I1210 00:49:57.177372  120345 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:49:57.217326  120345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:49:57.254716  120345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1210 00:49:57.254748  120345 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 00:49:57.254811  120345 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:49:57.254858  120345 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.254902  120345 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.254927  120345 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.254953  120345 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.255000  120345 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.254906  120345 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1210 00:49:57.254879  120345 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.256354  120345 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:49:57.256363  120345 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.256366  120345 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.256363  120345 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.256450  120345 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.256462  120345 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.256512  120345 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.256550  120345 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1210 00:49:57.413379  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.415621  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.417301  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1210 00:49:57.443313  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.458952  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.477346  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.487106  120345 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1210 00:49:57.487146  120345 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.487187  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.491998  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.502879  120345 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1210 00:49:57.502931  120345 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.502979  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.505055  120345 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1210 00:49:57.505094  120345 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1210 00:49:57.505131  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.579745  120345 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1210 00:49:57.579796  120345 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.579849  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.591368  120345 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1210 00:49:57.591411  120345 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.591479  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.591519  120345 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1210 00:49:57.591554  120345 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.591564  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.591605  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.602393  120345 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1210 00:49:57.602429  120345 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.602471  120345 ssh_runner.go:195] Run: which crictl
	I1210 00:49:57.602523  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.602579  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1210 00:49:57.602589  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.602664  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.602730  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.710801  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.710816  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.741314  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.741356  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.742268  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:57.742365  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1210 00:49:57.742422  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.865517  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1210 00:49:57.865520  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:57.884080  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1210 00:49:57.888422  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1210 00:49:57.888478  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1210 00:49:57.889608  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1210 00:49:57.889680  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1210 00:49:58.032121  120345 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1210 00:49:58.032188  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1210 00:49:58.032281  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1210 00:49:58.033184  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1210 00:49:58.033233  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1210 00:49:58.033272  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1210 00:49:58.033306  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1210 00:49:58.033314  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1210 00:49:58.033364  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1210 00:49:58.033411  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1210 00:49:58.033461  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1210 00:49:58.035868  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1210 00:49:58.035949  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1210 00:49:58.071538  120345 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1210 00:49:58.071581  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1210 00:49:58.071597  120345 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1210 00:49:58.071621  120345 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1210 00:49:58.071631  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1210 00:49:58.071691  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1210 00:49:58.071711  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1210 00:49:58.071754  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1210 00:49:58.071781  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1210 00:49:58.071812  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1210 00:49:58.076699  120345 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1210 00:49:58.157826  120345 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:50:01.327622  120345 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.255963245s)
	I1210 00:50:01.327667  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1210 00:50:01.327682  120345 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.169819594s)
	I1210 00:50:01.327703  120345 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1210 00:50:01.327765  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1210 00:50:02.063955  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1210 00:50:02.064004  120345 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1210 00:50:02.064059  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1210 00:50:02.911944  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1210 00:50:02.911997  120345 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1210 00:50:02.912054  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1210 00:50:03.048318  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1210 00:50:03.048361  120345 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1210 00:50:03.048403  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1210 00:50:03.395360  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1210 00:50:03.395407  120345 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1210 00:50:03.395461  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1210 00:50:04.147816  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1210 00:50:04.147869  120345 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1210 00:50:04.147925  120345 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1210 00:50:06.090519  120345 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (1.942569715s)
	I1210 00:50:06.090546  120345 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1210 00:50:06.090592  120345 cache_images.go:123] Successfully loaded all cached images
	I1210 00:50:06.090600  120345 cache_images.go:92] duration metric: took 8.835838585s to LoadCachedImages
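
Note: once LoadCachedImages reports success, every image from the list above should resolve locally in the guest's runtime; a quick manual check with the same crictl binary:

    sudo /usr/bin/crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause|storage-provisioner'
    # each entry from the LoadCachedImages list should now appear with its expected tag
    # (v1.24.4 for the kube-* images, 3.5.3-0 for etcd, v1.8.6 for coredns, 3.7 for pause, v5 for storage-provisioner)
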
	I1210 00:50:06.090611  120345 kubeadm.go:934] updating node { 192.168.39.111 8443 v1.24.4 crio true true} ...
	I1210 00:50:06.090712  120345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-961155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-961155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:50:06.090773  120345 ssh_runner.go:195] Run: crio config
	I1210 00:50:06.137278  120345 cni.go:84] Creating CNI manager for ""
	I1210 00:50:06.137302  120345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:50:06.137311  120345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:50:06.137330  120345 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.111 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-961155 NodeName:test-preload-961155 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:50:06.137465  120345 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-961155"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:50:06.137527  120345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1210 00:50:06.146587  120345 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:50:06.146644  120345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:50:06.154903  120345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1210 00:50:06.169276  120345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:50:06.183441  120345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1210 00:50:06.198061  120345 ssh_runner.go:195] Run: grep 192.168.39.111	control-plane.minikube.internal$ /etc/hosts
	I1210 00:50:06.201133  120345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:50:06.211399  120345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:50:06.323789  120345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:50:06.339695  120345 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155 for IP: 192.168.39.111
	I1210 00:50:06.339721  120345 certs.go:194] generating shared ca certs ...
	I1210 00:50:06.339742  120345 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:50:06.339945  120345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:50:06.339993  120345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:50:06.340004  120345 certs.go:256] generating profile certs ...
	I1210 00:50:06.340120  120345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/client.key
	I1210 00:50:06.340224  120345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/apiserver.key.134d74d4
	I1210 00:50:06.340282  120345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/proxy-client.key
	I1210 00:50:06.340472  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:50:06.340519  120345 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:50:06.340534  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:50:06.340568  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:50:06.340598  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:50:06.340621  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:50:06.340684  120345 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:50:06.341626  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:50:06.377016  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:50:06.405188  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:50:06.429034  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:50:06.454153  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 00:50:06.483261  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:50:06.508296  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:50:06.541039  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 00:50:06.562380  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:50:06.583025  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:50:06.603729  120345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:50:06.624362  120345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:50:06.638930  120345 ssh_runner.go:195] Run: openssl version
	I1210 00:50:06.643950  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:50:06.653136  120345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:50:06.657022  120345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:50:06.657062  120345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:50:06.662231  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:50:06.671663  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:50:06.680892  120345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:50:06.684751  120345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:50:06.684785  120345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:50:06.689811  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:50:06.698913  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:50:06.708237  120345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:50:06.712006  120345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:50:06.712051  120345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:50:06.717196  120345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:50:06.726468  120345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:50:06.730379  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:50:06.735771  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:50:06.741022  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:50:06.746491  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:50:06.751693  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:50:06.756758  120345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
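
Each of the openssl x509 -checkend 86400 runs above exits non-zero if the certificate is expired or will expire within the next 24 hours; that is how the restart path decides the existing control-plane certificates can be reused. A rough Go equivalent of one such check (the certificate path is taken from the log and assumed to be readable):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same decision as `openssl x509 -noout -checkend 86400 -in <cert>`:
	// fail if the cert is already expired or expires within 24h.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
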
	I1210 00:50:06.761825  120345 kubeadm.go:392] StartCluster: {Name:test-preload-961155 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-961155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:50:06.761906  120345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:50:06.761943  120345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:50:06.796644  120345 cri.go:89] found id: ""
	I1210 00:50:06.796699  120345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:50:06.805894  120345 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 00:50:06.805916  120345 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 00:50:06.805970  120345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 00:50:06.814465  120345 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:50:06.814897  120345 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-961155" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:50:06.815024  120345 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-961155" cluster setting kubeconfig missing "test-preload-961155" context setting]
	I1210 00:50:06.815293  120345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:50:06.815869  120345 kapi.go:59] client config for test-preload-961155: &rest.Config{Host:"https://192.168.39.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:50:06.816546  120345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 00:50:06.825293  120345 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.111
	I1210 00:50:06.825324  120345 kubeadm.go:1160] stopping kube-system containers ...
	I1210 00:50:06.825336  120345 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 00:50:06.825377  120345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:50:06.857836  120345 cri.go:89] found id: ""
	I1210 00:50:06.857897  120345 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:50:06.872715  120345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:50:06.881098  120345 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:50:06.881112  120345 kubeadm.go:157] found existing configuration files:
	
	I1210 00:50:06.881144  120345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:50:06.889071  120345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:50:06.889125  120345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:50:06.897217  120345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:50:06.904963  120345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:50:06.905020  120345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:50:06.912957  120345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:50:06.920569  120345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:50:06.920614  120345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:50:06.928763  120345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:50:06.936663  120345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:50:06.936724  120345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:50:06.945023  120345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:50:06.953136  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:07.044137  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:07.736276  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:08.015280  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:08.079011  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:08.173093  120345 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:50:08.173206  120345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:50:08.674046  120345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:50:09.174108  120345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:50:09.187987  120345 api_server.go:72] duration metric: took 1.014895921s to wait for apiserver process to appear ...
	I1210 00:50:09.188017  120345 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:50:09.188041  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:09.188579  120345 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I1210 00:50:09.688088  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:09.688666  120345 api_server.go:269] stopped: https://192.168.39.111:8443/healthz: Get "https://192.168.39.111:8443/healthz": dial tcp 192.168.39.111:8443: connect: connection refused
	I1210 00:50:10.188222  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:12.850072  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:50:12.850103  120345 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:50:12.850120  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:12.880784  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:50:12.880816  120345 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:50:13.188171  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:13.192985  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:50:13.193012  120345 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:50:13.688553  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:13.693293  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:50:13.693330  120345 api_server.go:103] status: https://192.168.39.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:50:14.188202  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:14.194086  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I1210 00:50:14.200027  120345 api_server.go:141] control plane version: v1.24.4
	I1210 00:50:14.200052  120345 api_server.go:131] duration metric: took 5.012028431s to wait for apiserver health ...
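
The healthz wait above simply re-requests https://192.168.39.111:8443/healthz until it returns 200 "ok", tolerating the connection-refused, 403, and 500 responses seen while the apiserver and its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish. A stripped-down polling loop; for brevity this sketch skips TLS verification, whereas minikube authenticates with the profile's client certificate and trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: do not skip verification in real tooling.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403/500 during startup: RBAC bootstrap and post-start hooks
			// have not completed yet, so keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.111:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
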
	I1210 00:50:14.200061  120345 cni.go:84] Creating CNI manager for ""
	I1210 00:50:14.200067  120345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:50:14.201428  120345 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:50:14.202515  120345 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:50:14.212279  120345 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
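
The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not echoed into the log. The sketch below writes what a bridge CNI configuration for the 10.244.0.0/16 pod CIDR typically looks like; the field values are assumptions, not the exact file minikube generates:

package main

import (
	"log"
	"os"
)

// Representative bridge CNI config for a single-node cluster; the real
// contents written by minikube are not shown in the log above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Requires root, since /etc/cni/net.d is system-owned.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
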
	I1210 00:50:14.228803  120345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:50:14.228873  120345 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:50:14.228893  120345 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:50:14.243072  120345 system_pods.go:59] 8 kube-system pods found
	I1210 00:50:14.243097  120345 system_pods.go:61] "coredns-6d4b75cb6d-7lfvr" [23cf24d3-e8ba-4d06-98f2-25bdb6b0936c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:50:14.243103  120345 system_pods.go:61] "coredns-6d4b75cb6d-kdr9n" [aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:50:14.243107  120345 system_pods.go:61] "etcd-test-preload-961155" [df2cda45-4427-4eed-8b6a-c4d0d746fa62] Running
	I1210 00:50:14.243121  120345 system_pods.go:61] "kube-apiserver-test-preload-961155" [7e34e4ce-c5b6-4597-9c0a-f55edf37e565] Running
	I1210 00:50:14.243128  120345 system_pods.go:61] "kube-controller-manager-test-preload-961155" [eac29c76-3a59-4162-9426-30dc8083e22e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:50:14.243132  120345 system_pods.go:61] "kube-proxy-ghvwm" [62269a5e-0178-4d9b-b9a2-3392247174df] Running
	I1210 00:50:14.243136  120345 system_pods.go:61] "kube-scheduler-test-preload-961155" [7d9413ca-7e88-4ace-a23b-88b3247dcb75] Running
	I1210 00:50:14.243139  120345 system_pods.go:61] "storage-provisioner" [7553e1ab-e31b-4a36-aa66-ff137c4f6202] Running
	I1210 00:50:14.243143  120345 system_pods.go:74] duration metric: took 14.327628ms to wait for pod list to return data ...
	I1210 00:50:14.243149  120345 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:50:14.245959  120345 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:50:14.245982  120345 node_conditions.go:123] node cpu capacity is 2
	I1210 00:50:14.245996  120345 node_conditions.go:105] duration metric: took 2.841082ms to run NodePressure ...
	I1210 00:50:14.246027  120345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:50:14.434360  120345 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 00:50:14.442549  120345 kubeadm.go:739] kubelet initialised
	I1210 00:50:14.442589  120345 kubeadm.go:740] duration metric: took 8.201253ms waiting for restarted kubelet to initialise ...
	I1210 00:50:14.442601  120345 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:50:14.449715  120345 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:14.455804  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.455826  120345 pod_ready.go:82] duration metric: took 6.087407ms for pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:14.455835  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.455844  120345 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-kdr9n" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:14.460996  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "coredns-6d4b75cb6d-kdr9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.461013  120345 pod_ready.go:82] duration metric: took 5.161797ms for pod "coredns-6d4b75cb6d-kdr9n" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:14.461021  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "coredns-6d4b75cb6d-kdr9n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.461026  120345 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:14.466166  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "etcd-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.466193  120345 pod_ready.go:82] duration metric: took 5.158493ms for pod "etcd-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:14.466205  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "etcd-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.466214  120345 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:14.637433  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "kube-apiserver-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.637469  120345 pod_ready.go:82] duration metric: took 171.240245ms for pod "kube-apiserver-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:14.637484  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "kube-apiserver-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:14.637494  120345 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:15.034867  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.034902  120345 pod_ready.go:82] duration metric: took 397.39459ms for pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:15.034915  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.034925  120345 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-ghvwm" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:15.433409  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "kube-proxy-ghvwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.433437  120345 pod_ready.go:82] duration metric: took 398.50139ms for pod "kube-proxy-ghvwm" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:15.433447  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "kube-proxy-ghvwm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.433453  120345 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:15.832473  120345 pod_ready.go:98] node "test-preload-961155" hosting pod "kube-scheduler-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.832500  120345 pod_ready.go:82] duration metric: took 399.041112ms for pod "kube-scheduler-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	E1210 00:50:15.832511  120345 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-961155" hosting pod "kube-scheduler-test-preload-961155" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:15.832524  120345 pod_ready.go:39] duration metric: took 1.389912306s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
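
Each pod_ready wait above lists the labelled kube-system pods and inspects their PodReady condition, skipping pods whose node has not reported Ready yet. A condensed client-go version of that check (the kubeconfig path is the one from the log; adjust it when running elsewhere):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List one group of the system-critical pods the wait loop targets.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	}
}
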
	I1210 00:50:15.832542  120345 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:50:15.843380  120345 ops.go:34] apiserver oom_adj: -16
	I1210 00:50:15.843404  120345 kubeadm.go:597] duration metric: took 9.037479152s to restartPrimaryControlPlane
	I1210 00:50:15.843415  120345 kubeadm.go:394] duration metric: took 9.081594158s to StartCluster
	I1210 00:50:15.843435  120345 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:50:15.843507  120345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:50:15.844202  120345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:50:15.844413  120345 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.111 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:50:15.844483  120345 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:50:15.844584  120345 addons.go:69] Setting storage-provisioner=true in profile "test-preload-961155"
	I1210 00:50:15.844602  120345 addons.go:234] Setting addon storage-provisioner=true in "test-preload-961155"
	W1210 00:50:15.844614  120345 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:50:15.844600  120345 addons.go:69] Setting default-storageclass=true in profile "test-preload-961155"
	I1210 00:50:15.844640  120345 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-961155"
	I1210 00:50:15.844663  120345 host.go:66] Checking if "test-preload-961155" exists ...
	I1210 00:50:15.844690  120345 config.go:182] Loaded profile config "test-preload-961155": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1210 00:50:15.845058  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:50:15.845096  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:50:15.845160  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:50:15.845177  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:50:15.846807  120345 out.go:177] * Verifying Kubernetes components...
	I1210 00:50:15.848094  120345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:50:15.860393  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44307
	I1210 00:50:15.860889  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:50:15.861368  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:50:15.861395  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:50:15.861704  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:50:15.861942  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetState
	I1210 00:50:15.864257  120345 kapi.go:59] client config for test-preload-961155: &rest.Config{Host:"https://192.168.39.111:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/test-preload-961155/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:50:15.864533  120345 addons.go:234] Setting addon default-storageclass=true in "test-preload-961155"
	W1210 00:50:15.864553  120345 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:50:15.864605  120345 host.go:66] Checking if "test-preload-961155" exists ...
	I1210 00:50:15.864798  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33391
	I1210 00:50:15.864967  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:50:15.865014  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:50:15.865261  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:50:15.865720  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:50:15.865740  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:50:15.866047  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:50:15.866511  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:50:15.866543  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:50:15.879425  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I1210 00:50:15.879879  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:50:15.880301  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:50:15.880320  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:50:15.880621  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:50:15.881053  120345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:50:15.881091  120345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:50:15.885402  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I1210 00:50:15.907174  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:50:15.907706  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:50:15.907734  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:50:15.908051  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:50:15.908331  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetState
	I1210 00:50:15.910014  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:50:15.911903  120345 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:50:15.913181  120345 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:50:15.913201  120345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:50:15.913222  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:50:15.916364  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:50:15.916799  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:50:15.916828  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:50:15.916971  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:50:15.917176  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:50:15.917367  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:50:15.917508  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:50:15.924885  120345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
	I1210 00:50:15.925401  120345 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:50:15.925929  120345 main.go:141] libmachine: Using API Version  1
	I1210 00:50:15.925952  120345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:50:15.926305  120345 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:50:15.926520  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetState
	I1210 00:50:15.928055  120345 main.go:141] libmachine: (test-preload-961155) Calling .DriverName
	I1210 00:50:15.928311  120345 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:50:15.928328  120345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:50:15.928348  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHHostname
	I1210 00:50:15.931474  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:50:15.931894  120345 main.go:141] libmachine: (test-preload-961155) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:1b", ip: ""} in network mk-test-preload-961155: {Iface:virbr1 ExpiryTime:2024-12-10 01:49:44 +0000 UTC Type:0 Mac:52:54:00:8b:6b:1b Iaid: IPaddr:192.168.39.111 Prefix:24 Hostname:test-preload-961155 Clientid:01:52:54:00:8b:6b:1b}
	I1210 00:50:15.931922  120345 main.go:141] libmachine: (test-preload-961155) DBG | domain test-preload-961155 has defined IP address 192.168.39.111 and MAC address 52:54:00:8b:6b:1b in network mk-test-preload-961155
	I1210 00:50:15.932077  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHPort
	I1210 00:50:15.932268  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHKeyPath
	I1210 00:50:15.932408  120345 main.go:141] libmachine: (test-preload-961155) Calling .GetSSHUsername
	I1210 00:50:15.932539  120345 sshutil.go:53] new ssh client: &{IP:192.168.39.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/test-preload-961155/id_rsa Username:docker}
	I1210 00:50:16.004656  120345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:50:16.023573  120345 node_ready.go:35] waiting up to 6m0s for node "test-preload-961155" to be "Ready" ...
	I1210 00:50:16.096963  120345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:50:16.117672  120345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:50:17.029366  120345 main.go:141] libmachine: Making call to close driver server
	I1210 00:50:17.029395  120345 main.go:141] libmachine: (test-preload-961155) Calling .Close
	I1210 00:50:17.029442  120345 main.go:141] libmachine: Making call to close driver server
	I1210 00:50:17.029460  120345 main.go:141] libmachine: (test-preload-961155) Calling .Close
	I1210 00:50:17.029702  120345 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:50:17.029716  120345 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:50:17.029721  120345 main.go:141] libmachine: (test-preload-961155) DBG | Closing plugin on server side
	I1210 00:50:17.029724  120345 main.go:141] libmachine: Making call to close driver server
	I1210 00:50:17.029784  120345 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:50:17.029789  120345 main.go:141] libmachine: (test-preload-961155) Calling .Close
	I1210 00:50:17.029788  120345 main.go:141] libmachine: (test-preload-961155) DBG | Closing plugin on server side
	I1210 00:50:17.029798  120345 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:50:17.029836  120345 main.go:141] libmachine: Making call to close driver server
	I1210 00:50:17.029850  120345 main.go:141] libmachine: (test-preload-961155) Calling .Close
	I1210 00:50:17.029975  120345 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:50:17.029988  120345 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:50:17.030205  120345 main.go:141] libmachine: (test-preload-961155) DBG | Closing plugin on server side
	I1210 00:50:17.030216  120345 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:50:17.030228  120345 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:50:17.035378  120345 main.go:141] libmachine: Making call to close driver server
	I1210 00:50:17.035394  120345 main.go:141] libmachine: (test-preload-961155) Calling .Close
	I1210 00:50:17.035621  120345 main.go:141] libmachine: (test-preload-961155) DBG | Closing plugin on server side
	I1210 00:50:17.035658  120345 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:50:17.035678  120345 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:50:17.037419  120345 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:50:17.038506  120345 addons.go:510] duration metric: took 1.194032977s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 00:50:18.027253  120345 node_ready.go:53] node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:20.027431  120345 node_ready.go:53] node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:22.027553  120345 node_ready.go:53] node "test-preload-961155" has status "Ready":"False"
	I1210 00:50:23.528162  120345 node_ready.go:49] node "test-preload-961155" has status "Ready":"True"
	I1210 00:50:23.528195  120345 node_ready.go:38] duration metric: took 7.504596893s for node "test-preload-961155" to be "Ready" ...
	I1210 00:50:23.528205  120345 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:50:23.532552  120345 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.536609  120345 pod_ready.go:93] pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:23.536631  120345 pod_ready.go:82] duration metric: took 4.057407ms for pod "coredns-6d4b75cb6d-7lfvr" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.536642  120345 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.540358  120345 pod_ready.go:93] pod "etcd-test-preload-961155" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:23.540377  120345 pod_ready.go:82] duration metric: took 3.728343ms for pod "etcd-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.540386  120345 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.544167  120345 pod_ready.go:93] pod "kube-apiserver-test-preload-961155" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:23.544184  120345 pod_ready.go:82] duration metric: took 3.790823ms for pod "kube-apiserver-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.544194  120345 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.548682  120345 pod_ready.go:93] pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:23.548702  120345 pod_ready.go:82] duration metric: took 4.500304ms for pod "kube-controller-manager-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.548712  120345 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ghvwm" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.927500  120345 pod_ready.go:93] pod "kube-proxy-ghvwm" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:23.927522  120345 pod_ready.go:82] duration metric: took 378.803329ms for pod "kube-proxy-ghvwm" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:23.927532  120345 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:24.328207  120345 pod_ready.go:93] pod "kube-scheduler-test-preload-961155" in "kube-system" namespace has status "Ready":"True"
	I1210 00:50:24.328230  120345 pod_ready.go:82] duration metric: took 400.692022ms for pod "kube-scheduler-test-preload-961155" in "kube-system" namespace to be "Ready" ...
	I1210 00:50:24.328241  120345 pod_ready.go:39] duration metric: took 800.023224ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 00:50:24.328264  120345 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:50:24.328315  120345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:50:24.342185  120345 api_server.go:72] duration metric: took 8.497739609s to wait for apiserver process to appear ...
	I1210 00:50:24.342214  120345 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:50:24.342243  120345 api_server.go:253] Checking apiserver healthz at https://192.168.39.111:8443/healthz ...
	I1210 00:50:24.347137  120345 api_server.go:279] https://192.168.39.111:8443/healthz returned 200:
	ok
	I1210 00:50:24.347992  120345 api_server.go:141] control plane version: v1.24.4
	I1210 00:50:24.348015  120345 api_server.go:131] duration metric: took 5.795469ms to wait for apiserver health ...
	I1210 00:50:24.348024  120345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:50:24.530161  120345 system_pods.go:59] 7 kube-system pods found
	I1210 00:50:24.530198  120345 system_pods.go:61] "coredns-6d4b75cb6d-7lfvr" [23cf24d3-e8ba-4d06-98f2-25bdb6b0936c] Running
	I1210 00:50:24.530206  120345 system_pods.go:61] "etcd-test-preload-961155" [df2cda45-4427-4eed-8b6a-c4d0d746fa62] Running
	I1210 00:50:24.530212  120345 system_pods.go:61] "kube-apiserver-test-preload-961155" [7e34e4ce-c5b6-4597-9c0a-f55edf37e565] Running
	I1210 00:50:24.530216  120345 system_pods.go:61] "kube-controller-manager-test-preload-961155" [eac29c76-3a59-4162-9426-30dc8083e22e] Running
	I1210 00:50:24.530221  120345 system_pods.go:61] "kube-proxy-ghvwm" [62269a5e-0178-4d9b-b9a2-3392247174df] Running
	I1210 00:50:24.530225  120345 system_pods.go:61] "kube-scheduler-test-preload-961155" [7d9413ca-7e88-4ace-a23b-88b3247dcb75] Running
	I1210 00:50:24.530230  120345 system_pods.go:61] "storage-provisioner" [7553e1ab-e31b-4a36-aa66-ff137c4f6202] Running
	I1210 00:50:24.530237  120345 system_pods.go:74] duration metric: took 182.20443ms to wait for pod list to return data ...
	I1210 00:50:24.530247  120345 default_sa.go:34] waiting for default service account to be created ...
	I1210 00:50:24.727318  120345 default_sa.go:45] found service account: "default"
	I1210 00:50:24.727353  120345 default_sa.go:55] duration metric: took 197.096555ms for default service account to be created ...
	I1210 00:50:24.727366  120345 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 00:50:24.930110  120345 system_pods.go:86] 7 kube-system pods found
	I1210 00:50:24.930138  120345 system_pods.go:89] "coredns-6d4b75cb6d-7lfvr" [23cf24d3-e8ba-4d06-98f2-25bdb6b0936c] Running
	I1210 00:50:24.930143  120345 system_pods.go:89] "etcd-test-preload-961155" [df2cda45-4427-4eed-8b6a-c4d0d746fa62] Running
	I1210 00:50:24.930147  120345 system_pods.go:89] "kube-apiserver-test-preload-961155" [7e34e4ce-c5b6-4597-9c0a-f55edf37e565] Running
	I1210 00:50:24.930151  120345 system_pods.go:89] "kube-controller-manager-test-preload-961155" [eac29c76-3a59-4162-9426-30dc8083e22e] Running
	I1210 00:50:24.930154  120345 system_pods.go:89] "kube-proxy-ghvwm" [62269a5e-0178-4d9b-b9a2-3392247174df] Running
	I1210 00:50:24.930157  120345 system_pods.go:89] "kube-scheduler-test-preload-961155" [7d9413ca-7e88-4ace-a23b-88b3247dcb75] Running
	I1210 00:50:24.930159  120345 system_pods.go:89] "storage-provisioner" [7553e1ab-e31b-4a36-aa66-ff137c4f6202] Running
	I1210 00:50:24.930166  120345 system_pods.go:126] duration metric: took 202.792749ms to wait for k8s-apps to be running ...
	I1210 00:50:24.930172  120345 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 00:50:24.930217  120345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:50:24.944404  120345 system_svc.go:56] duration metric: took 14.224857ms WaitForService to wait for kubelet
	I1210 00:50:24.944429  120345 kubeadm.go:582] duration metric: took 9.099991952s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:50:24.944446  120345 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:50:25.127611  120345 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:50:25.127639  120345 node_conditions.go:123] node cpu capacity is 2
	I1210 00:50:25.127651  120345 node_conditions.go:105] duration metric: took 183.200773ms to run NodePressure ...
	I1210 00:50:25.127663  120345 start.go:241] waiting for startup goroutines ...
	I1210 00:50:25.127669  120345 start.go:246] waiting for cluster config update ...
	I1210 00:50:25.127681  120345 start.go:255] writing updated cluster config ...
	I1210 00:50:25.127993  120345 ssh_runner.go:195] Run: rm -f paused
	I1210 00:50:25.174682  120345 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1210 00:50:25.176479  120345 out.go:201] 
	W1210 00:50:25.177631  120345 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1210 00:50:25.178782  120345 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1210 00:50:25.180170  120345 out.go:177] * Done! kubectl is now configured to use "test-preload-961155" cluster and "default" namespace by default
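The version-skew warning above is informational only: the host kubectl is v1.31.3 while the restored cluster runs v1.24.4, a 7-minor skew. The hint minikube prints can be used to run a version-matched client against this profile; a minimal sketch (assuming the standard -p/--profile flag) would be:

  # use the kubectl bundled with minikube for the test-preload-961155 profile
  minikube -p test-preload-961155 kubectl -- version --client
  minikube -p test-preload-961155 kubectl -- get pods -A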
	
	
	==> CRI-O <==
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.039337762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733791826039319078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bc2f706-568d-4c01-a6a6-0a3f0c61952d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.039968203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05cec631-aa37-492c-b732-94e802eb2f82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.040032336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05cec631-aa37-492c-b732-94e802eb2f82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.040185359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3ead67f578eb90818c59fa56630f8a444ff85fb1e3c3090aca32cc34b6b9d3d,PodSandboxId:700c27a856cb51762943e99aadb450bbbed431eb70b95e661d73cf8d56f33f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733791821522841949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7lfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23cf24d3-e8ba-4d06-98f2-25bdb6b0936c,},Annotations:map[string]string{io.kubernetes.container.hash: f051e265,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a78639710e4b8f076a89c75620ab78c6fbf5b8fb4a3328a67c2162c103e1a4,PodSandboxId:841d9baf7ab283915730d40ced63b6b9d2fa829d746c997db8b480be1b3d27b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733791814698137482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7553e1ab-e31b-4a36-aa66-ff137c4f6202,},Annotations:map[string]string{io.kubernetes.container.hash: 7d5d0dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768a580338c117230efed7e9b43c2394e94a366a3a17634a16f5fc82523d99ba,PodSandboxId:33f965b73dc3597d67d8a957e06cf850082da9fd8454f4d5445ca5dd828968e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733791814442666080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ghvwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
269a5e-0178-4d9b-b9a2-3392247174df,},Annotations:map[string]string{io.kubernetes.container.hash: f8248a51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9971c8bc43793e2f9b2eaa4c9c0b51322d966f3c56fd473cbaee710cb1d2b67,PodSandboxId:aeefd980ece5e5f3f0d3cd662e8affa5c5772274e775c946b8aab6f3ea406e40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733791808847356929,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af0e626ade0a349524587dfc246fb19a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 91e42a8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5539942db7ad58aa9d177e91a7a7d6798999330a3bc0ab48bbdae83aa81ee53,PodSandboxId:f27c4ca9d1f727673b0db4176a0cc21e09dd27b1d734b5b52be89a697d6d6edf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733791808844167304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc986465cd994a19447efdefaa2c3c8,},Annotations:map
[string]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aa98b618676ff4166ddbd212fda9658968fb3cf3075cac61f90b46a71846bc,PodSandboxId:0ac88637f2532ff6789b04d9c8090d221999d8399e8ef83cdf95433832ddcc09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733791808789898258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef41ce69ff8c58bb95d6862317c49e0e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4555915e36c1da85527f0eab846b49e724cb33c442b00eaea0cf4f31cf4e5e,PodSandboxId:7d33266832c8adb7d7c180ef7bb07b7dc22a952d0716e8602ac6b60ce61217cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733791808787059446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128672d4a674cce1e19ae999572c3b99,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05cec631-aa37-492c-b732-94e802eb2f82 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.074503903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31e78deb-a55b-4d59-aa60-955d6024f535 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.074569490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31e78deb-a55b-4d59-aa60-955d6024f535 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.075588582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3035de7-5503-4392-9869-f04d4432e2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.076184397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733791826076161864,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3035de7-5503-4392-9869-f04d4432e2d9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.076680938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a2a23c0-aeed-4998-b98d-749ef54be862 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.076777108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a2a23c0-aeed-4998-b98d-749ef54be862 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.076957378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3ead67f578eb90818c59fa56630f8a444ff85fb1e3c3090aca32cc34b6b9d3d,PodSandboxId:700c27a856cb51762943e99aadb450bbbed431eb70b95e661d73cf8d56f33f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733791821522841949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7lfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23cf24d3-e8ba-4d06-98f2-25bdb6b0936c,},Annotations:map[string]string{io.kubernetes.container.hash: f051e265,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a78639710e4b8f076a89c75620ab78c6fbf5b8fb4a3328a67c2162c103e1a4,PodSandboxId:841d9baf7ab283915730d40ced63b6b9d2fa829d746c997db8b480be1b3d27b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733791814698137482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7553e1ab-e31b-4a36-aa66-ff137c4f6202,},Annotations:map[string]string{io.kubernetes.container.hash: 7d5d0dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768a580338c117230efed7e9b43c2394e94a366a3a17634a16f5fc82523d99ba,PodSandboxId:33f965b73dc3597d67d8a957e06cf850082da9fd8454f4d5445ca5dd828968e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733791814442666080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ghvwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
269a5e-0178-4d9b-b9a2-3392247174df,},Annotations:map[string]string{io.kubernetes.container.hash: f8248a51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9971c8bc43793e2f9b2eaa4c9c0b51322d966f3c56fd473cbaee710cb1d2b67,PodSandboxId:aeefd980ece5e5f3f0d3cd662e8affa5c5772274e775c946b8aab6f3ea406e40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733791808847356929,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af0e626ade0a349524587dfc246fb19a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 91e42a8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5539942db7ad58aa9d177e91a7a7d6798999330a3bc0ab48bbdae83aa81ee53,PodSandboxId:f27c4ca9d1f727673b0db4176a0cc21e09dd27b1d734b5b52be89a697d6d6edf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733791808844167304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc986465cd994a19447efdefaa2c3c8,},Annotations:map
[string]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aa98b618676ff4166ddbd212fda9658968fb3cf3075cac61f90b46a71846bc,PodSandboxId:0ac88637f2532ff6789b04d9c8090d221999d8399e8ef83cdf95433832ddcc09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733791808789898258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef41ce69ff8c58bb95d6862317c49e0e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4555915e36c1da85527f0eab846b49e724cb33c442b00eaea0cf4f31cf4e5e,PodSandboxId:7d33266832c8adb7d7c180ef7bb07b7dc22a952d0716e8602ac6b60ce61217cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733791808787059446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128672d4a674cce1e19ae999572c3b99,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a2a23c0-aeed-4998-b98d-749ef54be862 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.108997895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=181e8fc9-1836-479a-a285-d071f47d93ef name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.109066471Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=181e8fc9-1836-479a-a285-d071f47d93ef name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.109853400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3ab8a28-f637-492d-ba76-6db2e7bd134c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.110254626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733791826110237365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3ab8a28-f637-492d-ba76-6db2e7bd134c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.110740153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0bd437c-6ba2-4dfb-9935-07e92408d6f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.110799364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0bd437c-6ba2-4dfb-9935-07e92408d6f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.110955476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3ead67f578eb90818c59fa56630f8a444ff85fb1e3c3090aca32cc34b6b9d3d,PodSandboxId:700c27a856cb51762943e99aadb450bbbed431eb70b95e661d73cf8d56f33f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733791821522841949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7lfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23cf24d3-e8ba-4d06-98f2-25bdb6b0936c,},Annotations:map[string]string{io.kubernetes.container.hash: f051e265,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a78639710e4b8f076a89c75620ab78c6fbf5b8fb4a3328a67c2162c103e1a4,PodSandboxId:841d9baf7ab283915730d40ced63b6b9d2fa829d746c997db8b480be1b3d27b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733791814698137482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7553e1ab-e31b-4a36-aa66-ff137c4f6202,},Annotations:map[string]string{io.kubernetes.container.hash: 7d5d0dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768a580338c117230efed7e9b43c2394e94a366a3a17634a16f5fc82523d99ba,PodSandboxId:33f965b73dc3597d67d8a957e06cf850082da9fd8454f4d5445ca5dd828968e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733791814442666080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ghvwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
269a5e-0178-4d9b-b9a2-3392247174df,},Annotations:map[string]string{io.kubernetes.container.hash: f8248a51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9971c8bc43793e2f9b2eaa4c9c0b51322d966f3c56fd473cbaee710cb1d2b67,PodSandboxId:aeefd980ece5e5f3f0d3cd662e8affa5c5772274e775c946b8aab6f3ea406e40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733791808847356929,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af0e626ade0a349524587dfc246fb19a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 91e42a8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5539942db7ad58aa9d177e91a7a7d6798999330a3bc0ab48bbdae83aa81ee53,PodSandboxId:f27c4ca9d1f727673b0db4176a0cc21e09dd27b1d734b5b52be89a697d6d6edf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733791808844167304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc986465cd994a19447efdefaa2c3c8,},Annotations:map
[string]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aa98b618676ff4166ddbd212fda9658968fb3cf3075cac61f90b46a71846bc,PodSandboxId:0ac88637f2532ff6789b04d9c8090d221999d8399e8ef83cdf95433832ddcc09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733791808789898258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef41ce69ff8c58bb95d6862317c49e0e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4555915e36c1da85527f0eab846b49e724cb33c442b00eaea0cf4f31cf4e5e,PodSandboxId:7d33266832c8adb7d7c180ef7bb07b7dc22a952d0716e8602ac6b60ce61217cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733791808787059446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128672d4a674cce1e19ae999572c3b99,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0bd437c-6ba2-4dfb-9935-07e92408d6f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.142035339Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ca069c7-af4b-4302-8606-00760e6e12c8 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.142103191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ca069c7-af4b-4302-8606-00760e6e12c8 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.143220892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c69c570e-7847-49e1-baa0-84c8c740a658 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.143622950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733791826143605763,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c69c570e-7847-49e1-baa0-84c8c740a658 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.144183845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a42608df-e81f-45a5-a0e0-a79ad2fa9d60 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.144255562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a42608df-e81f-45a5-a0e0-a79ad2fa9d60 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:50:26 test-preload-961155 crio[685]: time="2024-12-10 00:50:26.144408753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b3ead67f578eb90818c59fa56630f8a444ff85fb1e3c3090aca32cc34b6b9d3d,PodSandboxId:700c27a856cb51762943e99aadb450bbbed431eb70b95e661d73cf8d56f33f14,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733791821522841949,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7lfvr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23cf24d3-e8ba-4d06-98f2-25bdb6b0936c,},Annotations:map[string]string{io.kubernetes.container.hash: f051e265,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a78639710e4b8f076a89c75620ab78c6fbf5b8fb4a3328a67c2162c103e1a4,PodSandboxId:841d9baf7ab283915730d40ced63b6b9d2fa829d746c997db8b480be1b3d27b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733791814698137482,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 7553e1ab-e31b-4a36-aa66-ff137c4f6202,},Annotations:map[string]string{io.kubernetes.container.hash: 7d5d0dea,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768a580338c117230efed7e9b43c2394e94a366a3a17634a16f5fc82523d99ba,PodSandboxId:33f965b73dc3597d67d8a957e06cf850082da9fd8454f4d5445ca5dd828968e2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733791814442666080,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ghvwm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
269a5e-0178-4d9b-b9a2-3392247174df,},Annotations:map[string]string{io.kubernetes.container.hash: f8248a51,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9971c8bc43793e2f9b2eaa4c9c0b51322d966f3c56fd473cbaee710cb1d2b67,PodSandboxId:aeefd980ece5e5f3f0d3cd662e8affa5c5772274e775c946b8aab6f3ea406e40,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733791808847356929,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af0e626ade0a349524587dfc246fb19a,},Anno
tations:map[string]string{io.kubernetes.container.hash: 91e42a8d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5539942db7ad58aa9d177e91a7a7d6798999330a3bc0ab48bbdae83aa81ee53,PodSandboxId:f27c4ca9d1f727673b0db4176a0cc21e09dd27b1d734b5b52be89a697d6d6edf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733791808844167304,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc986465cd994a19447efdefaa2c3c8,},Annotations:map
[string]string{io.kubernetes.container.hash: abccce37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16aa98b618676ff4166ddbd212fda9658968fb3cf3075cac61f90b46a71846bc,PodSandboxId:0ac88637f2532ff6789b04d9c8090d221999d8399e8ef83cdf95433832ddcc09,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733791808789898258,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef41ce69ff8c58bb95d6862317c49e0e,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a4555915e36c1da85527f0eab846b49e724cb33c442b00eaea0cf4f31cf4e5e,PodSandboxId:7d33266832c8adb7d7c180ef7bb07b7dc22a952d0716e8602ac6b60ce61217cc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733791808787059446,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-961155,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 128672d4a674cce1e19ae999572c3b99,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a42608df-e81f-45a5-a0e0-a79ad2fa9d60 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3ead67f578eb       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   700c27a856cb5       coredns-6d4b75cb6d-7lfvr
	a7a78639710e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       1                   841d9baf7ab28       storage-provisioner
	768a580338c11       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   33f965b73dc35       kube-proxy-ghvwm
	d9971c8bc4379       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   aeefd980ece5e       etcd-test-preload-961155
	d5539942db7ad       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   f27c4ca9d1f72       kube-apiserver-test-preload-961155
	16aa98b618676       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   0ac88637f2532       kube-controller-manager-test-preload-961155
	0a4555915e36c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   7d33266832c8a       kube-scheduler-test-preload-961155
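Every container in the table above is on Attempt 1 and was created within the last ~20 seconds, i.e. the second `minikube start` restarted the preloaded workloads rather than recreating them. Roughly the same view can be reproduced from the CRI side on the node; a sketch (assuming minikube ssh accepts the command as trailing arguments, as it normally does) might be:

  # list running CRI-O containers inside the test-preload-961155 VM
  minikube -p test-preload-961155 ssh "sudo crictl ps"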
	
	
	==> coredns [b3ead67f578eb90818c59fa56630f8a444ff85fb1e3c3090aca32cc34b6b9d3d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:54084 - 55944 "HINFO IN 6269399059560021560.6955657689849549553. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021620092s
	
	
	==> describe nodes <==
	Name:               test-preload-961155
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-961155
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=test-preload-961155
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_49_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:48:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-961155
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:50:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:50:23 +0000   Tue, 10 Dec 2024 00:48:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:50:23 +0000   Tue, 10 Dec 2024 00:48:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:50:23 +0000   Tue, 10 Dec 2024 00:48:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:50:23 +0000   Tue, 10 Dec 2024 00:50:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.111
	  Hostname:    test-preload-961155
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66ce55a0887c4e0c8f1a5d859506faac
	  System UUID:                66ce55a0-887c-4e0c-8f1a-5d859506faac
	  Boot ID:                    df8c35e3-ea07-4b1a-a32a-8825197a27be
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7lfvr                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-test-preload-961155                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         86s
	  kube-system                 kube-apiserver-test-preload-961155             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-test-preload-961155    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-ghvwm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-test-preload-961155             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node test-preload-961155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node test-preload-961155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node test-preload-961155 status is now: NodeHasSufficientPID
	  Normal  NodeReady                76s                kubelet          Node test-preload-961155 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node test-preload-961155 event: Registered Node test-preload-961155 in Controller
	  Normal  Starting                 18s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node test-preload-961155 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node test-preload-961155 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node test-preload-961155 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-961155 event: Registered Node test-preload-961155 in Controller
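The repeated Starting/NodeAllocatableEnforced/RegisteredNode events reflect the two kubelet lifetimes in this test: the original boot (~86s before capture) and the restart under test (~18s before capture). The section above is an ordinary node description and could be regenerated against the live profile with something like:

  # re-run the node description that produced the section above
  minikube -p test-preload-961155 kubectl -- describe node test-preload-961155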
	
	
	==> dmesg <==
	[Dec10 00:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052353] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037352] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.785716] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.918903] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.534806] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.059465] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.051350] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.046815] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.150081] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.134421] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.239956] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[Dec10 00:50] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +0.058074] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.627457] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +6.492214] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.466584] systemd-fstab-generator[1769]: Ignoring "noauto" option for root device
	[  +5.453089] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d9971c8bc43793e2f9b2eaa4c9c0b51322d966f3c56fd473cbaee710cb1d2b67] <==
	{"level":"info","ts":"2024-12-10T00:50:09.139Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6ca692280bc5404a","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-10T00:50:09.139Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-10T00:50:09.146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a switched to configuration voters=(7829105702924009546)"}
	{"level":"info","ts":"2024-12-10T00:50:09.147Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"38179560bbe6e25a","local-member-id":"6ca692280bc5404a","added-peer-id":"6ca692280bc5404a","added-peer-peer-urls":["https://192.168.39.111:2380"]}
	{"level":"info","ts":"2024-12-10T00:50:09.147Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"38179560bbe6e25a","local-member-id":"6ca692280bc5404a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:09.147Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:09.153Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T00:50:09.153Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.111:2380"}
	{"level":"info","ts":"2024-12-10T00:50:09.153Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.111:2380"}
	{"level":"info","ts":"2024-12-10T00:50:09.153Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:50:09.153Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6ca692280bc5404a","initial-advertise-peer-urls":["https://192.168.39.111:2380"],"listen-peer-urls":["https://192.168.39.111:2380"],"advertise-client-urls":["https://192.168.39.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a received MsgPreVoteResp from 6ca692280bc5404a at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became candidate at term 3"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a received MsgVoteResp from 6ca692280bc5404a at term 3"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6ca692280bc5404a became leader at term 3"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6ca692280bc5404a elected leader 6ca692280bc5404a at term 3"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6ca692280bc5404a","local-member-attributes":"{Name:test-preload-961155 ClientURLs:[https://192.168.39.111:2379]}","request-path":"/0/members/6ca692280bc5404a/attributes","cluster-id":"38179560bbe6e25a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:50:10.507Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:50:10.509Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.111:2379"}
	{"level":"info","ts":"2024-12-10T00:50:10.509Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:50:10.510Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:50:10.511Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:50:10.511Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:50:26 up 0 min,  0 users,  load average: 0.32, 0.09, 0.03
	Linux test-preload-961155 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d5539942db7ad58aa9d177e91a7a7d6798999330a3bc0ab48bbdae83aa81ee53] <==
	I1210 00:50:12.797477       1 establishing_controller.go:76] Starting EstablishingController
	I1210 00:50:12.797709       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1210 00:50:12.797896       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1210 00:50:12.797943       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1210 00:50:12.822973       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1210 00:50:12.840651       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1210 00:50:12.875921       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1210 00:50:12.877173       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E1210 00:50:12.888269       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1210 00:50:12.908475       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1210 00:50:12.961315       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 00:50:12.965129       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1210 00:50:12.966725       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1210 00:50:12.969098       1 cache.go:39] Caches are synced for autoregister controller
	I1210 00:50:12.970914       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 00:50:13.467505       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1210 00:50:13.773954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 00:50:14.288745       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1210 00:50:14.305752       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1210 00:50:14.365267       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1210 00:50:14.380894       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 00:50:14.386546       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 00:50:14.733341       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1210 00:50:25.278015       1 controller.go:611] quota admission added evaluator for: endpoints
	I1210 00:50:25.315561       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [16aa98b618676ff4166ddbd212fda9658968fb3cf3075cac61f90b46a71846bc] <==
	I1210 00:50:25.260925       1 shared_informer.go:262] Caches are synced for endpoint
	I1210 00:50:25.262924       1 shared_informer.go:262] Caches are synced for TTL
	I1210 00:50:25.264622       1 shared_informer.go:262] Caches are synced for crt configmap
	I1210 00:50:25.266620       1 shared_informer.go:262] Caches are synced for ephemeral
	I1210 00:50:25.267226       1 shared_informer.go:262] Caches are synced for node
	I1210 00:50:25.267304       1 range_allocator.go:173] Starting range CIDR allocator
	I1210 00:50:25.267337       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1210 00:50:25.267368       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1210 00:50:25.268591       1 shared_informer.go:262] Caches are synced for service account
	I1210 00:50:25.270417       1 shared_informer.go:262] Caches are synced for expand
	I1210 00:50:25.272460       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1210 00:50:25.278335       1 shared_informer.go:262] Caches are synced for deployment
	I1210 00:50:25.278777       1 shared_informer.go:262] Caches are synced for GC
	I1210 00:50:25.288749       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1210 00:50:25.289374       1 shared_informer.go:262] Caches are synced for job
	I1210 00:50:25.290432       1 shared_informer.go:262] Caches are synced for attach detach
	I1210 00:50:25.291141       1 shared_informer.go:262] Caches are synced for cronjob
	I1210 00:50:25.297207       1 shared_informer.go:262] Caches are synced for persistent volume
	I1210 00:50:25.297621       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1210 00:50:25.299329       1 shared_informer.go:262] Caches are synced for namespace
	I1210 00:50:25.423429       1 shared_informer.go:262] Caches are synced for resource quota
	I1210 00:50:25.461299       1 shared_informer.go:262] Caches are synced for resource quota
	I1210 00:50:25.910744       1 shared_informer.go:262] Caches are synced for garbage collector
	I1210 00:50:25.932022       1 shared_informer.go:262] Caches are synced for garbage collector
	I1210 00:50:25.932057       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [768a580338c117230efed7e9b43c2394e94a366a3a17634a16f5fc82523d99ba] <==
	I1210 00:50:14.656129       1 node.go:163] Successfully retrieved node IP: 192.168.39.111
	I1210 00:50:14.656200       1 server_others.go:138] "Detected node IP" address="192.168.39.111"
	I1210 00:50:14.656241       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1210 00:50:14.723255       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1210 00:50:14.723284       1 server_others.go:206] "Using iptables Proxier"
	I1210 00:50:14.723861       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1210 00:50:14.724822       1 server.go:661] "Version info" version="v1.24.4"
	I1210 00:50:14.724836       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:50:14.726975       1 config.go:317] "Starting service config controller"
	I1210 00:50:14.727397       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1210 00:50:14.727429       1 config.go:226] "Starting endpoint slice config controller"
	I1210 00:50:14.727434       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1210 00:50:14.728410       1 config.go:444] "Starting node config controller"
	I1210 00:50:14.728432       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1210 00:50:14.828404       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1210 00:50:14.828474       1 shared_informer.go:262] Caches are synced for node config
	I1210 00:50:14.828451       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [0a4555915e36c1da85527f0eab846b49e724cb33c442b00eaea0cf4f31cf4e5e] <==
	I1210 00:50:09.461063       1 serving.go:348] Generated self-signed cert in-memory
	W1210 00:50:12.825747       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1210 00:50:12.825886       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1210 00:50:12.825966       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1210 00:50:12.825990       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1210 00:50:12.889865       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1210 00:50:12.889948       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:50:12.894729       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1210 00:50:12.894904       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1210 00:50:12.895937       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1210 00:50:12.896061       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1210 00:50:12.995785       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.160658    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7553e1ab-e31b-4a36-aa66-ff137c4f6202-tmp\") pod \"storage-provisioner\" (UID: \"7553e1ab-e31b-4a36-aa66-ff137c4f6202\") " pod="kube-system/storage-provisioner"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.160838    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2sr8\" (UniqueName: \"kubernetes.io/projected/62269a5e-0178-4d9b-b9a2-3392247174df-kube-api-access-s2sr8\") pod \"kube-proxy-ghvwm\" (UID: \"62269a5e-0178-4d9b-b9a2-3392247174df\") " pod="kube-system/kube-proxy-ghvwm"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.160976    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v986p\" (UniqueName: \"kubernetes.io/projected/7553e1ab-e31b-4a36-aa66-ff137c4f6202-kube-api-access-v986p\") pod \"storage-provisioner\" (UID: \"7553e1ab-e31b-4a36-aa66-ff137c4f6202\") " pod="kube-system/storage-provisioner"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.161094    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume\") pod \"coredns-6d4b75cb6d-7lfvr\" (UID: \"23cf24d3-e8ba-4d06-98f2-25bdb6b0936c\") " pod="kube-system/coredns-6d4b75cb6d-7lfvr"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.161170    1140 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtj4\" (UniqueName: \"kubernetes.io/projected/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-kube-api-access-jxtj4\") pod \"coredns-6d4b75cb6d-7lfvr\" (UID: \"23cf24d3-e8ba-4d06-98f2-25bdb6b0936c\") " pod="kube-system/coredns-6d4b75cb6d-7lfvr"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.161265    1140 reconciler.go:159] "Reconciler: start to sync state"
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.590370    1140 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-config-volume\") pod \"aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4\" (UID: \"aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4\") "
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.590451    1140 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g94sh\" (UniqueName: \"kubernetes.io/projected/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-kube-api-access-g94sh\") pod \"aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4\" (UID: \"aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4\") "
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: E1210 00:50:13.590786    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: E1210 00:50:13.590966    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume podName:23cf24d3-e8ba-4d06-98f2-25bdb6b0936c nodeName:}" failed. No retries permitted until 2024-12-10 00:50:14.090900541 +0000 UTC m=+6.114827188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume") pod "coredns-6d4b75cb6d-7lfvr" (UID: "23cf24d3-e8ba-4d06-98f2-25bdb6b0936c") : object "kube-system"/"coredns" not registered
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: W1210 00:50:13.591963    1140 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: W1210 00:50:13.592191    1140 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4/volumes/kubernetes.io~projected/kube-api-access-g94sh: clearQuota called, but quotas disabled
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.592356    1140 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-kube-api-access-g94sh" (OuterVolumeSpecName: "kube-api-access-g94sh") pod "aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4" (UID: "aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4"). InnerVolumeSpecName "kube-api-access-g94sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.592506    1140 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4" (UID: "aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.691248    1140 reconciler.go:384] "Volume detached for volume \"kube-api-access-g94sh\" (UniqueName: \"kubernetes.io/projected/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-kube-api-access-g94sh\") on node \"test-preload-961155\" DevicePath \"\""
	Dec 10 00:50:13 test-preload-961155 kubelet[1140]: I1210 00:50:13.691289    1140 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4-config-volume\") on node \"test-preload-961155\" DevicePath \"\""
	Dec 10 00:50:14 test-preload-961155 kubelet[1140]: E1210 00:50:14.094272    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 00:50:14 test-preload-961155 kubelet[1140]: E1210 00:50:14.094364    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume podName:23cf24d3-e8ba-4d06-98f2-25bdb6b0936c nodeName:}" failed. No retries permitted until 2024-12-10 00:50:15.094340023 +0000 UTC m=+7.118266681 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume") pod "coredns-6d4b75cb6d-7lfvr" (UID: "23cf24d3-e8ba-4d06-98f2-25bdb6b0936c") : object "kube-system"/"coredns" not registered
	Dec 10 00:50:15 test-preload-961155 kubelet[1140]: E1210 00:50:15.102857    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 00:50:15 test-preload-961155 kubelet[1140]: E1210 00:50:15.102932    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume podName:23cf24d3-e8ba-4d06-98f2-25bdb6b0936c nodeName:}" failed. No retries permitted until 2024-12-10 00:50:17.102914002 +0000 UTC m=+9.126840660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume") pod "coredns-6d4b75cb6d-7lfvr" (UID: "23cf24d3-e8ba-4d06-98f2-25bdb6b0936c") : object "kube-system"/"coredns" not registered
	Dec 10 00:50:15 test-preload-961155 kubelet[1140]: E1210 00:50:15.193134    1140 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7lfvr" podUID=23cf24d3-e8ba-4d06-98f2-25bdb6b0936c
	Dec 10 00:50:16 test-preload-961155 kubelet[1140]: I1210 00:50:16.203315    1140 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4 path="/var/lib/kubelet/pods/aa22d4eb-3eed-4b48-8b1b-cabdb10a82b4/volumes"
	Dec 10 00:50:17 test-preload-961155 kubelet[1140]: E1210 00:50:17.120427    1140 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 10 00:50:17 test-preload-961155 kubelet[1140]: E1210 00:50:17.120510    1140 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume podName:23cf24d3-e8ba-4d06-98f2-25bdb6b0936c nodeName:}" failed. No retries permitted until 2024-12-10 00:50:21.120492825 +0000 UTC m=+13.144419484 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/23cf24d3-e8ba-4d06-98f2-25bdb6b0936c-config-volume") pod "coredns-6d4b75cb6d-7lfvr" (UID: "23cf24d3-e8ba-4d06-98f2-25bdb6b0936c") : object "kube-system"/"coredns" not registered
	Dec 10 00:50:17 test-preload-961155 kubelet[1140]: E1210 00:50:17.192286    1140 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7lfvr" podUID=23cf24d3-e8ba-4d06-98f2-25bdb6b0936c
	
	
	==> storage-provisioner [a7a78639710e4b8f076a89c75620ab78c6fbf5b8fb4a3328a67c2162c103e1a4] <==
	I1210 00:50:14.792394       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-961155 -n test-preload-961155
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-961155 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-961155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-961155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-961155: (1.0939032s)
--- FAIL: TestPreload (161.96s)
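A note on the failure mode visible in the kubelet log above: the node keeps reporting NetworkReady=false because "No CNI configuration file in /etc/cni/net.d/" has appeared yet, so coredns cannot be synced. As a rough illustration only (this is not minikube's or the kubelet's actual code; only the /etc/cni/net.d path comes from the log), the following minimal Go sketch shows the kind of "is a CNI config present yet" check that gates that readiness condition:

// cnicheck: a self-contained illustration of the condition the kubelet log
// above keeps reporting. The node stays NetworkReady=false until a CNI
// configuration file shows up in /etc/cni/net.d.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one .conf, .conflist,
// or .json file, which is roughly what a container runtime looks for before
// it marks the pod network as ready.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if ok {
		fmt.Println("CNI configuration present; network plugin should report ready")
	} else {
		fmt.Println("no CNI configuration yet; kubelet will keep reporting NetworkReady=false")
	}
}

Run inside the guest, a check like this would simply confirm whether the bridge CNI configuration had been written by the time the test's readiness window expired.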

                                                
                                    
x
+
TestKubernetesUpgrade (391.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m52.609943341s)
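In the stderr capture further below, the kvm2 driver repeatedly logs "unable to find current IP address of domain ... will retry after ...: waiting for machine to come up" with growing delays before the start ultimately fails. As a simplified sketch only (not libmachine's real implementation; the attempt count and delays here are illustrative), the retry-with-growing-delay pattern those lines reflect looks like this:

// waitForIP: a simplified sketch of the retry pattern visible in the stderr
// log below. Each failed lookup of the domain's IP schedules another attempt
// after a longer, slightly randomized delay.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real "ask libvirt/DHCP for the domain's lease"
// call; here it always fails so the backoff behaviour is visible.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, roughly matching the
		// "will retry after ..." intervals in the log.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", i+1, wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	if _, err := waitForIP(5); err != nil {
		fmt.Println(err)
	}
}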

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-481624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-481624" primary control-plane node in "kubernetes-upgrade-481624" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:52:18.376979  121784 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:52:18.377068  121784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:52:18.377073  121784 out.go:358] Setting ErrFile to fd 2...
	I1210 00:52:18.377077  121784 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:52:18.377370  121784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:52:18.378500  121784 out.go:352] Setting JSON to false
	I1210 00:52:18.379426  121784 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9289,"bootTime":1733782649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:52:18.379559  121784 start.go:139] virtualization: kvm guest
	I1210 00:52:18.381595  121784 out.go:177] * [kubernetes-upgrade-481624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:52:18.383178  121784 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:52:18.383190  121784 notify.go:220] Checking for updates...
	I1210 00:52:18.385929  121784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:52:18.388244  121784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:52:18.389363  121784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:52:18.390442  121784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:52:18.392816  121784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:52:18.394351  121784 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:52:18.433150  121784 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:52:18.434539  121784 start.go:297] selected driver: kvm2
	I1210 00:52:18.434553  121784 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:52:18.434585  121784 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:52:18.435491  121784 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:52:18.452385  121784 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:52:18.469573  121784 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:52:18.469615  121784 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:52:18.469877  121784 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1210 00:52:18.469906  121784 cni.go:84] Creating CNI manager for ""
	I1210 00:52:18.469966  121784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:52:18.469978  121784 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 00:52:18.470048  121784 start.go:340] cluster config:
	{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:52:18.470217  121784 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:52:18.472081  121784 out.go:177] * Starting "kubernetes-upgrade-481624" primary control-plane node in "kubernetes-upgrade-481624" cluster
	I1210 00:52:18.473206  121784 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 00:52:18.473255  121784 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 00:52:18.473273  121784 cache.go:56] Caching tarball of preloaded images
	I1210 00:52:18.473354  121784 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:52:18.473369  121784 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 00:52:18.473819  121784 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/config.json ...
	I1210 00:52:18.473854  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/config.json: {Name:mk8ccb6f740f7d5814e3a4b64df0620ac89f75f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:52:18.474016  121784 start.go:360] acquireMachinesLock for kubernetes-upgrade-481624: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:52:42.642848  121784 start.go:364] duration metric: took 24.168779546s to acquireMachinesLock for "kubernetes-upgrade-481624"
	I1210 00:52:42.642923  121784 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:52:42.643023  121784 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:52:42.645693  121784 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:52:42.645874  121784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:52:42.645918  121784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:52:42.661510  121784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I1210 00:52:42.661882  121784 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:52:42.662445  121784 main.go:141] libmachine: Using API Version  1
	I1210 00:52:42.662470  121784 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:52:42.662853  121784 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:52:42.663036  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetMachineName
	I1210 00:52:42.663191  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:52:42.663354  121784 start.go:159] libmachine.API.Create for "kubernetes-upgrade-481624" (driver="kvm2")
	I1210 00:52:42.663395  121784 client.go:168] LocalClient.Create starting
	I1210 00:52:42.663429  121784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:52:42.663471  121784 main.go:141] libmachine: Decoding PEM data...
	I1210 00:52:42.663493  121784 main.go:141] libmachine: Parsing certificate...
	I1210 00:52:42.663574  121784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:52:42.663598  121784 main.go:141] libmachine: Decoding PEM data...
	I1210 00:52:42.663617  121784 main.go:141] libmachine: Parsing certificate...
	I1210 00:52:42.663643  121784 main.go:141] libmachine: Running pre-create checks...
	I1210 00:52:42.663655  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .PreCreateCheck
	I1210 00:52:42.664026  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetConfigRaw
	I1210 00:52:42.664450  121784 main.go:141] libmachine: Creating machine...
	I1210 00:52:42.664467  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Create
	I1210 00:52:42.664597  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Creating KVM machine...
	I1210 00:52:42.665731  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found existing default KVM network
	I1210 00:52:42.666738  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:42.666593  122087 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:e3:d6} reservation:<nil>}
	I1210 00:52:42.667419  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:42.667323  122087 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002483e0}
	I1210 00:52:42.667444  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | created network xml: 
	I1210 00:52:42.667457  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | <network>
	I1210 00:52:42.667469  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   <name>mk-kubernetes-upgrade-481624</name>
	I1210 00:52:42.667510  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   <dns enable='no'/>
	I1210 00:52:42.667524  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   
	I1210 00:52:42.667565  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1210 00:52:42.667596  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |     <dhcp>
	I1210 00:52:42.667661  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1210 00:52:42.667687  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |     </dhcp>
	I1210 00:52:42.667703  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   </ip>
	I1210 00:52:42.667713  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG |   
	I1210 00:52:42.667733  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | </network>
	I1210 00:52:42.667743  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | 
	I1210 00:52:42.672472  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | trying to create private KVM network mk-kubernetes-upgrade-481624 192.168.50.0/24...
	I1210 00:52:42.741356  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | private KVM network mk-kubernetes-upgrade-481624 192.168.50.0/24 created
	I1210 00:52:42.741402  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:42.741328  122087 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:52:42.741442  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624 ...
	I1210 00:52:42.741470  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:52:42.741486  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:52:43.012982  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:43.012831  122087 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa...
	I1210 00:52:43.124202  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:43.124029  122087 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/kubernetes-upgrade-481624.rawdisk...
	I1210 00:52:43.124232  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Writing magic tar header
	I1210 00:52:43.124256  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Writing SSH key tar header
	I1210 00:52:43.124269  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:43.124155  122087 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624 ...
	I1210 00:52:43.124290  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624
	I1210 00:52:43.124305  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:52:43.124331  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624 (perms=drwx------)
	I1210 00:52:43.124352  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:52:43.124368  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:52:43.124399  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:52:43.124417  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:52:43.124431  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:52:43.124447  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:52:43.124464  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:52:43.124478  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Checking permissions on dir: /home
	I1210 00:52:43.124490  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Skipping /home - not owner
	I1210 00:52:43.124503  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:52:43.124525  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:52:43.124540  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Creating domain...
	I1210 00:52:43.125576  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) define libvirt domain using xml: 
	I1210 00:52:43.125609  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) <domain type='kvm'>
	I1210 00:52:43.125622  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <name>kubernetes-upgrade-481624</name>
	I1210 00:52:43.125638  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <memory unit='MiB'>2200</memory>
	I1210 00:52:43.125648  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <vcpu>2</vcpu>
	I1210 00:52:43.125658  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <features>
	I1210 00:52:43.125681  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <acpi/>
	I1210 00:52:43.125698  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <apic/>
	I1210 00:52:43.125727  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <pae/>
	I1210 00:52:43.125742  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     
	I1210 00:52:43.125752  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   </features>
	I1210 00:52:43.125763  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <cpu mode='host-passthrough'>
	I1210 00:52:43.125771  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   
	I1210 00:52:43.125781  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   </cpu>
	I1210 00:52:43.125789  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <os>
	I1210 00:52:43.125798  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <type>hvm</type>
	I1210 00:52:43.125804  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <boot dev='cdrom'/>
	I1210 00:52:43.125814  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <boot dev='hd'/>
	I1210 00:52:43.125823  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <bootmenu enable='no'/>
	I1210 00:52:43.125841  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   </os>
	I1210 00:52:43.125851  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   <devices>
	I1210 00:52:43.125869  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <disk type='file' device='cdrom'>
	I1210 00:52:43.125892  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/boot2docker.iso'/>
	I1210 00:52:43.125907  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <target dev='hdc' bus='scsi'/>
	I1210 00:52:43.125915  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <readonly/>
	I1210 00:52:43.125930  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </disk>
	I1210 00:52:43.125941  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <disk type='file' device='disk'>
	I1210 00:52:43.125953  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:52:43.125973  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/kubernetes-upgrade-481624.rawdisk'/>
	I1210 00:52:43.126011  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <target dev='hda' bus='virtio'/>
	I1210 00:52:43.126038  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </disk>
	I1210 00:52:43.126054  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <interface type='network'>
	I1210 00:52:43.126067  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <source network='mk-kubernetes-upgrade-481624'/>
	I1210 00:52:43.126088  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <model type='virtio'/>
	I1210 00:52:43.126105  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </interface>
	I1210 00:52:43.126120  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <interface type='network'>
	I1210 00:52:43.126136  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <source network='default'/>
	I1210 00:52:43.126146  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <model type='virtio'/>
	I1210 00:52:43.126153  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </interface>
	I1210 00:52:43.126172  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <serial type='pty'>
	I1210 00:52:43.126187  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <target port='0'/>
	I1210 00:52:43.126200  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </serial>
	I1210 00:52:43.126211  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <console type='pty'>
	I1210 00:52:43.126224  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <target type='serial' port='0'/>
	I1210 00:52:43.126235  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </console>
	I1210 00:52:43.126246  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     <rng model='virtio'>
	I1210 00:52:43.126261  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)       <backend model='random'>/dev/random</backend>
	I1210 00:52:43.126273  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     </rng>
	I1210 00:52:43.126282  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     
	I1210 00:52:43.126290  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)     
	I1210 00:52:43.126300  121784 main.go:141] libmachine: (kubernetes-upgrade-481624)   </devices>
	I1210 00:52:43.126308  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) </domain>
	I1210 00:52:43.126317  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) 
	I1210 00:52:43.132935  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:69:92:32 in network default
	I1210 00:52:43.133542  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Ensuring networks are active...
	I1210 00:52:43.133567  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:43.134307  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Ensuring network default is active
	I1210 00:52:43.134626  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Ensuring network mk-kubernetes-upgrade-481624 is active
	I1210 00:52:43.135230  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Getting domain xml...
	I1210 00:52:43.135927  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Creating domain...
	I1210 00:52:44.432684  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Waiting to get IP...
	I1210 00:52:44.433818  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:44.434291  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:44.434321  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:44.434276  122087 retry.go:31] will retry after 193.967019ms: waiting for machine to come up
	I1210 00:52:44.630138  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:44.630683  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:44.630717  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:44.630627  122087 retry.go:31] will retry after 384.670929ms: waiting for machine to come up
	I1210 00:52:45.017356  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.017865  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.017886  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:45.017809  122087 retry.go:31] will retry after 365.984646ms: waiting for machine to come up
	I1210 00:52:45.385680  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.386242  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.386264  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:45.386200  122087 retry.go:31] will retry after 404.454749ms: waiting for machine to come up
	I1210 00:52:45.792903  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.793434  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:45.793469  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:45.793386  122087 retry.go:31] will retry after 693.581901ms: waiting for machine to come up
	I1210 00:52:46.488226  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:46.488686  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:46.488720  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:46.488629  122087 retry.go:31] will retry after 880.455339ms: waiting for machine to come up
	I1210 00:52:47.370213  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:47.370598  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:47.370634  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:47.370523  122087 retry.go:31] will retry after 1.074900345s: waiting for machine to come up
	I1210 00:52:48.446696  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:48.447118  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:48.447141  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:48.447084  122087 retry.go:31] will retry after 982.079772ms: waiting for machine to come up
	I1210 00:52:49.431518  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:49.432031  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:49.432062  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:49.431973  122087 retry.go:31] will retry after 1.274478684s: waiting for machine to come up
	I1210 00:52:50.707900  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:50.708324  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:50.708348  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:50.708276  122087 retry.go:31] will retry after 2.230863422s: waiting for machine to come up
	I1210 00:52:52.940575  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:52.941044  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:52.941076  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:52.940985  122087 retry.go:31] will retry after 2.361887861s: waiting for machine to come up
	I1210 00:52:55.305508  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:55.305924  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:55.305964  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:55.305874  122087 retry.go:31] will retry after 3.587319635s: waiting for machine to come up
	I1210 00:52:58.895064  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:52:58.895528  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find current IP address of domain kubernetes-upgrade-481624 in network mk-kubernetes-upgrade-481624
	I1210 00:52:58.895557  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | I1210 00:52:58.895479  122087 retry.go:31] will retry after 4.141121159s: waiting for machine to come up
	I1210 00:53:03.038017  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:03.038500  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Found IP for machine: 192.168.50.207
	I1210 00:53:03.038519  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Reserving static IP address...
	I1210 00:53:03.038530  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has current primary IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:03.038827  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-481624", mac: "52:54:00:76:36:d7", ip: "192.168.50.207"} in network mk-kubernetes-upgrade-481624
	I1210 00:53:03.112049  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Getting to WaitForSSH function...
	I1210 00:53:03.112077  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Reserved static IP address: 192.168.50.207
	I1210 00:53:03.112092  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Waiting for SSH to be available...
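	(The IP wait above is a retry loop: each attempt re-reads the libvirt DHCP leases for the domain's MAC and, on a miss, sleeps for a growing delay before trying again. A minimal Go sketch of that retry-with-backoff pattern; the lookupIP helper, delays, and timeout are illustrative assumptions, not minikube's actual retry.go code.)

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a stand-in for reading the libvirt DHCP leases of the network
	// and returning the address assigned to the given MAC. Hypothetical helper.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet") // pretend the lease has not appeared
	}

	// waitForIP polls lookupIP with an increasing delay until it succeeds
	// or the overall deadline is exceeded.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(mac)
			if err == nil {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			if delay < 4*time.Second {
				delay *= 2 // back off, roughly like the growing delays in the log above
			}
		}
	}

	func main() {
		if ip, err := waitForIP("52:54:00:76:36:d7", 3*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
	```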
	I1210 00:53:03.115201  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:03.115467  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624
	I1210 00:53:03.115498  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-481624 interface with MAC address 52:54:00:76:36:d7
	I1210 00:53:03.115685  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Using SSH client type: external
	I1210 00:53:03.115717  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa (-rw-------)
	I1210 00:53:03.115765  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:53:03.115780  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | About to run SSH command:
	I1210 00:53:03.115791  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | exit 0
	I1210 00:53:03.120926  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | SSH cmd err, output: exit status 255: 
	I1210 00:53:03.120949  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1210 00:53:03.120960  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | command : exit 0
	I1210 00:53:03.120977  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | err     : exit status 255
	I1210 00:53:03.120988  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | output  : 
	I1210 00:53:06.123127  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Getting to WaitForSSH function...
	I1210 00:53:06.125434  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.125751  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.125784  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.125900  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Using SSH client type: external
	I1210 00:53:06.125922  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa (-rw-------)
	I1210 00:53:06.125941  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:53:06.125964  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | About to run SSH command:
	I1210 00:53:06.125972  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | exit 0
	I1210 00:53:06.250760  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | SSH cmd err, output: <nil>: 
	I1210 00:53:06.251047  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) KVM machine creation complete!
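	(The SSH wait shown above simply runs `exit 0` over ssh with host-key checking disabled and treats any non-zero status, such as the initial exit status 255, as "not ready yet". A small Go sketch of that probe using os/exec; the option list mirrors the one in the log, but the helper itself is an illustration, not minikube's implementation, and the key path is a placeholder.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest over ssh; a zero exit status means
	// sshd is up and the key is accepted.
	func sshReady(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+ip,
			"exit 0",
		)
		return cmd.Run() == nil // non-nil error covers exit status 255 and connection failures
	}

	func main() {
		ip := "192.168.50.207"                                // address from the DHCP lease above
		key := "/home/user/.minikube/machines/example/id_rsa" // placeholder key path
		for attempt := 1; attempt <= 10; attempt++ {
			if sshReady(ip, key) {
				fmt.Println("ssh is available")
				return
			}
			fmt.Println("ssh not ready, retrying in 3s")
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for ssh")
	}
	```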
	I1210 00:53:06.251356  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetConfigRaw
	I1210 00:53:06.251967  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:06.252147  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:06.252309  121784 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:53:06.252323  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetState
	I1210 00:53:06.253429  121784 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:53:06.253443  121784 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:53:06.253447  121784 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:53:06.253453  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.255670  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.256068  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.256101  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.256184  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:06.256342  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.256471  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.256612  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:06.256759  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:06.256996  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:06.257008  121784 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:53:06.353309  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:53:06.353332  121784 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:53:06.353340  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.356079  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.356421  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.356462  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.356618  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:06.356808  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.356982  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.357082  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:06.357225  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:06.357391  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:06.357403  121784 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:53:06.454488  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:53:06.454611  121784 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:53:06.454625  121784 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:53:06.454633  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetMachineName
	I1210 00:53:06.454878  121784 buildroot.go:166] provisioning hostname "kubernetes-upgrade-481624"
	I1210 00:53:06.454908  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetMachineName
	I1210 00:53:06.455089  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.457607  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.457912  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.457948  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.458054  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:06.458237  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.458361  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.458483  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:06.458645  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:06.458863  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:06.458880  121784 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-481624 && echo "kubernetes-upgrade-481624" | sudo tee /etc/hostname
	I1210 00:53:06.566830  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-481624
	
	I1210 00:53:06.566863  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.569355  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.569745  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.569777  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.569958  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:06.570175  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.570376  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.570538  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:06.570762  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:06.570980  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:06.571009  121784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-481624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-481624/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-481624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:53:06.673905  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:53:06.673934  121784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:53:06.673974  121784 buildroot.go:174] setting up certificates
	I1210 00:53:06.673989  121784 provision.go:84] configureAuth start
	I1210 00:53:06.673999  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetMachineName
	I1210 00:53:06.674263  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetIP
	I1210 00:53:06.676693  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.677018  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.677041  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.677138  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.679242  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.679605  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.679644  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.679711  121784 provision.go:143] copyHostCerts
	I1210 00:53:06.679775  121784 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:53:06.679797  121784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:53:06.679868  121784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:53:06.679984  121784 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:53:06.679994  121784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:53:06.680021  121784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:53:06.680109  121784 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:53:06.680121  121784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:53:06.680159  121784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:53:06.680257  121784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-481624 san=[127.0.0.1 192.168.50.207 kubernetes-upgrade-481624 localhost minikube]
	I1210 00:53:06.854356  121784 provision.go:177] copyRemoteCerts
	I1210 00:53:06.854421  121784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:53:06.854448  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:06.857210  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.857556  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:06.857588  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:06.857705  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:06.857889  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:06.858033  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:06.858151  121784 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:53:06.936323  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:53:06.958043  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1210 00:53:06.979695  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 00:53:07.000504  121784 provision.go:87] duration metric: took 326.501544ms to configureAuth
	I1210 00:53:07.000531  121784 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:53:07.000688  121784 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 00:53:07.000759  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:07.003273  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.003631  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.003661  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.003846  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:07.004017  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.004207  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.004308  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:07.004428  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:07.004627  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:07.004645  121784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:53:07.210477  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:53:07.210528  121784 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:53:07.210537  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetURL
	I1210 00:53:07.211923  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Using libvirt version 6000000
	I1210 00:53:07.214178  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.214504  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.214537  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.214734  121784 main.go:141] libmachine: Docker is up and running!
	I1210 00:53:07.214746  121784 main.go:141] libmachine: Reticulating splines...
	I1210 00:53:07.214753  121784 client.go:171] duration metric: took 24.551347748s to LocalClient.Create
	I1210 00:53:07.214774  121784 start.go:167] duration metric: took 24.55142238s to libmachine.API.Create "kubernetes-upgrade-481624"
	I1210 00:53:07.214784  121784 start.go:293] postStartSetup for "kubernetes-upgrade-481624" (driver="kvm2")
	I1210 00:53:07.214794  121784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:53:07.214810  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:07.215043  121784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:53:07.215073  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:07.217239  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.217526  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.217554  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.217668  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:07.217836  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.217973  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:07.218106  121784 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:53:07.299706  121784 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:53:07.303389  121784 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:53:07.303410  121784 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:53:07.303472  121784 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:53:07.303568  121784 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:53:07.303696  121784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:53:07.313635  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:53:07.336883  121784 start.go:296] duration metric: took 122.085978ms for postStartSetup
	I1210 00:53:07.336935  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetConfigRaw
	I1210 00:53:07.337541  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetIP
	I1210 00:53:07.340096  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.340436  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.340461  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.340722  121784 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/config.json ...
	I1210 00:53:07.340889  121784 start.go:128] duration metric: took 24.697847404s to createHost
	I1210 00:53:07.340913  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:07.343126  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.343424  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.343489  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.343614  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:07.343778  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.343906  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.343997  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:07.344114  121784 main.go:141] libmachine: Using SSH client type: native
	I1210 00:53:07.344286  121784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.207 22 <nil> <nil>}
	I1210 00:53:07.344300  121784 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:53:07.443278  121784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791987.420075268
	
	I1210 00:53:07.443307  121784 fix.go:216] guest clock: 1733791987.420075268
	I1210 00:53:07.443317  121784 fix.go:229] Guest: 2024-12-10 00:53:07.420075268 +0000 UTC Remote: 2024-12-10 00:53:07.340901265 +0000 UTC m=+49.016428225 (delta=79.174003ms)
	I1210 00:53:07.443359  121784 fix.go:200] guest clock delta is within tolerance: 79.174003ms
	I1210 00:53:07.443369  121784 start.go:83] releasing machines lock for "kubernetes-upgrade-481624", held for 24.800489658s
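	(The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and accept the 79ms delta as within tolerance. A minimal Go sketch of that comparison; the parsing helper and the 2-second threshold are assumptions for illustration only.)

	```go
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "date +%s.%N" output from the guest into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			frac := parts[1]
			for len(frac) < 9 { // pad the fractional part to nanosecond precision
				frac += "0"
			}
			nsec, err = strconv.ParseInt(frac[:9], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// value captured from the guest in the log above
		guest, err := parseGuestClock("1733791987.420075268")
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		}
	}
	```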
	I1210 00:53:07.443411  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:07.443720  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetIP
	I1210 00:53:07.446945  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.447389  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.447419  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.447583  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:07.448092  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:07.448338  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:53:07.448451  121784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:53:07.448495  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:07.448589  121784 ssh_runner.go:195] Run: cat /version.json
	I1210 00:53:07.448658  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:53:07.451364  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.451525  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.451747  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.451788  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.451927  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:07.451930  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:07.451960  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:07.452086  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.452144  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:53:07.452231  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:07.452325  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:53:07.452396  121784 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:53:07.452459  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:53:07.452580  121784 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:53:07.547336  121784 ssh_runner.go:195] Run: systemctl --version
	I1210 00:53:07.553567  121784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:53:07.716289  121784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:53:07.724456  121784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:53:07.724522  121784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:53:07.746640  121784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 00:53:07.746664  121784 start.go:495] detecting cgroup driver to use...
	I1210 00:53:07.746730  121784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:53:07.761811  121784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:53:07.779466  121784 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:53:07.779543  121784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:53:07.795696  121784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:53:07.808500  121784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:53:07.940650  121784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:53:08.099630  121784 docker.go:233] disabling docker service ...
	I1210 00:53:08.099705  121784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:53:08.113182  121784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:53:08.126217  121784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:53:08.242679  121784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:53:08.359849  121784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:53:08.375179  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:53:08.394773  121784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 00:53:08.394835  121784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:53:08.405350  121784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:53:08.405429  121784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:53:08.414991  121784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:53:08.424225  121784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:53:08.434689  121784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:53:08.444682  121784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:53:08.453519  121784 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:53:08.453558  121784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:53:08.466009  121784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:53:08.474936  121784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:53:08.586283  121784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:53:08.683927  121784 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:53:08.684024  121784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:53:08.688690  121784 start.go:563] Will wait 60s for crictl version
	I1210 00:53:08.688742  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:08.692279  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:53:08.730112  121784 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:53:08.730229  121784 ssh_runner.go:195] Run: crio --version
	I1210 00:53:08.760269  121784 ssh_runner.go:195] Run: crio --version
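	(After restarting crio, the log above waits up to 60s for the CRI socket before asking crictl for a version. A small Go sketch of polling a socket path until it exists or a deadline passes; the helper name and poll interval are assumptions.)

	```go
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the given path until it exists or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("socket %s did not appear within %v", path, timeout)
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready; safe to run crictl version")
	}
	```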
	I1210 00:53:08.792547  121784 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 00:53:08.793820  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetIP
	I1210 00:53:08.796993  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:08.797380  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:52:57 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:53:08.797414  121784 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:53:08.797667  121784 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 00:53:08.801591  121784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:53:08.813378  121784 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:53:08.813499  121784 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 00:53:08.813561  121784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:53:08.842974  121784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:53:08.843037  121784 ssh_runner.go:195] Run: which lz4
	I1210 00:53:08.846754  121784 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:53:08.850702  121784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:53:08.850726  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 00:53:10.277499  121784 crio.go:462] duration metric: took 1.430765387s to copy over tarball
	I1210 00:53:10.277593  121784 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:53:12.896149  121784 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.618513171s)
	I1210 00:53:12.896190  121784 crio.go:469] duration metric: took 2.618651456s to extract the tarball
	I1210 00:53:12.896202  121784 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 00:53:12.939219  121784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:53:12.987078  121784 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:53:12.987102  121784 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 00:53:12.987181  121784 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:53:12.987219  121784 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:12.987253  121784 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:12.987268  121784 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 00:53:12.987273  121784 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:12.987300  121784 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 00:53:12.987221  121784 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:12.987224  121784 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:12.989117  121784 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:12.989129  121784 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 00:53:12.989140  121784 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:12.989146  121784 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:53:12.989119  121784 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:12.989115  121784 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:12.989117  121784 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:12.989124  121784 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 00:53:13.135395  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:13.139750  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 00:53:13.141710  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:13.151156  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:13.152531  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:13.161326  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:13.174666  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 00:53:13.223956  121784 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 00:53:13.224011  121784 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:13.224067  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.275290  121784 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 00:53:13.275346  121784 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:13.275370  121784 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 00:53:13.275391  121784 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 00:53:13.275410  121784 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 00:53:13.275403  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.275431  121784 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:13.275452  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.275478  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.309584  121784 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 00:53:13.309630  121784 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:13.309686  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.314897  121784 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 00:53:13.314937  121784 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:13.314975  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.316838  121784 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 00:53:13.316875  121784 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 00:53:13.316896  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:13.316915  121784 ssh_runner.go:195] Run: which crictl
	I1210 00:53:13.316933  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:13.316896  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:13.316996  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:53:13.319093  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:13.320187  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:13.422261  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:53:13.422357  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:13.436572  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:13.436637  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:53:13.436673  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:13.446262  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:13.450744  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:13.501404  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:53:13.507318  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:53:13.611142  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:53:13.611183  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:53:13.611183  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:53:13.611238  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:53:13.616838  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:53:13.616880  121784 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:53:13.628778  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 00:53:13.738929  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 00:53:13.739034  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 00:53:13.742000  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 00:53:13.742023  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 00:53:13.747126  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 00:53:13.747276  121784 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 00:53:13.918849  121784 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:53:14.059769  121784 cache_images.go:92] duration metric: took 1.072646756s to LoadCachedImages
	W1210 00:53:14.059881  121784 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
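For orientation, the failure above is a stat on a cache file that was never downloaded. A minimal sketch (not minikube's actual implementation) of the kind of pre-check that would surface this earlier is to stat each expected tarball under the image cache before attempting a load. The cache directory and file naming follow the paths in the log; everything else is an assumption.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cacheDir mirrors the path seen in the log; adjust for your environment.
const cacheDir = "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64"

func main() {
	// Images the log tried to load, in repo/name:tag form.
	images := []string{
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
		"registry.k8s.io/pause:3.2",
	}
	for _, img := range images {
		// The cache stores "name_tag" files, e.g. kube-proxy_v1.20.0 (layout assumed from the log paths).
		file := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(file); err != nil {
			fmt.Printf("missing cached image %s: %v\n", img, err)
			continue
		}
		fmt.Printf("cached image present: %s\n", file)
	}
}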
	I1210 00:53:14.059900  121784 kubeadm.go:934] updating node { 192.168.50.207 8443 v1.20.0 crio true true} ...
	I1210 00:53:14.060054  121784 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-481624 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
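The ExecStart line above points the kubelet at the cri-o socket. A quick way to confirm that endpoint is actually accepting connections before the kubelet starts is to dial it directly; a minimal sketch follows (socket path taken from the flags above, the timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the kubelet is configured with: unix:///var/run/crio/crio.sock.
	const sock = "/var/run/crio/crio.sock"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("cri-o socket not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("cri-o socket is accepting connections at %s\n", sock)
}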
	I1210 00:53:14.060149  121784 ssh_runner.go:195] Run: crio config
	I1210 00:53:14.104009  121784 cni.go:84] Creating CNI manager for ""
	I1210 00:53:14.104039  121784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:53:14.104051  121784 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:53:14.104085  121784 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.207 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-481624 NodeName:kubernetes-upgrade-481624 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 00:53:14.104322  121784 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-481624"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:53:14.104400  121784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 00:53:14.114052  121784 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:53:14.114124  121784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:53:14.123120  121784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1210 00:53:14.138858  121784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:53:14.153996  121784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
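Before the generated kubeadm.yaml is copied to the node, one sanity check worth automating is that the pod subnet and service subnet in the config do not overlap, since overlapping ranges would misroute cluster traffic. A minimal standard-library sketch using the two CIDRs from the generated config above:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Values taken from the generated kubeadm config above.
	podSubnet := "10.244.0.0/16"
	serviceSubnet := "10.96.0.0/12"

	_, podNet, err := net.ParseCIDR(podSubnet)
	if err != nil {
		panic(err)
	}
	_, svcNet, err := net.ParseCIDR(serviceSubnet)
	if err != nil {
		panic(err)
	}

	// Two CIDRs overlap if either network contains the other's base address.
	if podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP) {
		fmt.Println("pod and service subnets overlap; kubeadm init would misroute traffic")
		return
	}
	fmt.Println("pod and service subnets are disjoint")
}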
	I1210 00:53:14.169016  121784 ssh_runner.go:195] Run: grep 192.168.50.207	control-plane.minikube.internal$ /etc/hosts
	I1210 00:53:14.172983  121784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:53:14.184023  121784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:53:14.306803  121784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:53:14.330030  121784 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624 for IP: 192.168.50.207
	I1210 00:53:14.330056  121784 certs.go:194] generating shared ca certs ...
	I1210 00:53:14.330080  121784 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:14.330271  121784 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:53:14.330330  121784 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:53:14.330342  121784 certs.go:256] generating profile certs ...
	I1210 00:53:14.330422  121784 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.key
	I1210 00:53:14.330441  121784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.crt with IP's: []
	I1210 00:53:14.737954  121784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.crt ...
	I1210 00:53:14.737985  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.crt: {Name:mk7265adfec5d3927279b0d0b34ec44d29ac31e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:14.738160  121784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.key ...
	I1210 00:53:14.738178  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.key: {Name:mk0693b8e6e87cf7e75d989ba080119b8b131661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:14.738265  121784 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key.fa416d62
	I1210 00:53:14.738282  121784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt.fa416d62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.207]
	I1210 00:53:15.083434  121784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt.fa416d62 ...
	I1210 00:53:15.083462  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt.fa416d62: {Name:mk6e864296cddd4317e0dce826e665d2abfec72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:15.083613  121784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key.fa416d62 ...
	I1210 00:53:15.083627  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key.fa416d62: {Name:mk892d1976d217a31e4fc7c53af3219172d736e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:15.083695  121784 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt.fa416d62 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt
	I1210 00:53:15.083763  121784 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key.fa416d62 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key
	I1210 00:53:15.083817  121784 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.key
	I1210 00:53:15.083832  121784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.crt with IP's: []
	I1210 00:53:15.230825  121784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.crt ...
	I1210 00:53:15.230854  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.crt: {Name:mk9dc86a8429f37d8bf8c04a695094528352b796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:15.231025  121784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.key ...
	I1210 00:53:15.231041  121784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.key: {Name:mk32def91474343ef7645f89c901c1d53ba25877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:53:15.231242  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:53:15.231282  121784 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:53:15.231293  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:53:15.231316  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:53:15.231339  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:53:15.231360  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:53:15.231401  121784 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:53:15.232003  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:53:15.270532  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:53:15.300944  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:53:15.324655  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:53:15.348388  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1210 00:53:15.370863  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 00:53:15.393054  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:53:15.415864  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 00:53:15.438814  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:53:15.461274  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:53:15.482680  121784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:53:15.505176  121784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:53:15.520599  121784 ssh_runner.go:195] Run: openssl version
	I1210 00:53:15.526054  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:53:15.535764  121784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:53:15.541505  121784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:53:15.541560  121784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:53:15.549014  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:53:15.563583  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:53:15.574657  121784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:53:15.580263  121784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:53:15.580304  121784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:53:15.587672  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:53:15.597809  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:53:15.607709  121784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:53:15.611662  121784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:53:15.611703  121784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:53:15.616949  121784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:53:15.626531  121784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:53:15.630264  121784 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
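The stat failure above is how the run decides this is likely a first start: the apiserver-kubelet-client certificate does not exist yet. A minimal local analogue of that check (path taken from the log; this is a sketch, not minikube's code, which runs the stat over SSH):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// Cert whose absence signals a first start, per the log above.
	const cert = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

	_, err := os.Stat(cert)
	switch {
	case err == nil:
		fmt.Println("cert exists; cluster has been initialized before")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("cert missing; treating this as a first start")
	default:
		fmt.Printf("could not stat %s: %v\n", cert, err)
	}
}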
	I1210 00:53:15.630325  121784 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:53:15.630424  121784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:53:15.630490  121784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:53:15.670253  121784 cri.go:89] found id: ""
	I1210 00:53:15.670337  121784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:53:15.685319  121784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:53:15.694047  121784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:53:15.702733  121784 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:53:15.702754  121784 kubeadm.go:157] found existing configuration files:
	
	I1210 00:53:15.702794  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:53:15.711035  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:53:15.711094  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:53:15.719410  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:53:15.727471  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:53:15.727520  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:53:15.736418  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:53:15.745502  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:53:15.745552  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:53:15.754024  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:53:15.762482  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:53:15.762527  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
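The grep/rm sequence above implements a simple rule: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm writes a fresh one. A minimal local sketch of the same rule (file list and endpoint taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up, kubeadm will write a fresh one.
			fmt.Printf("%s: %v (skipping)\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale config pointing elsewhere: remove it, mirroring the `sudo rm -f` above.
			if err := os.Remove(f); err != nil {
				fmt.Printf("failed to remove %s: %v\n", f, err)
				continue
			}
			fmt.Printf("removed stale config %s\n", f)
			continue
		}
		fmt.Printf("%s already targets %s, keeping it\n", f, endpoint)
	}
}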
	I1210 00:53:15.771557  121784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:53:16.041880  121784 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:55:13.481274  121784 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:55:13.481395  121784 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:55:13.483102  121784 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:55:13.483178  121784 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:55:13.483282  121784 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:55:13.483445  121784 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:55:13.483592  121784 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:55:13.483674  121784 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:55:13.485357  121784 out.go:235]   - Generating certificates and keys ...
	I1210 00:55:13.485471  121784 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:55:13.485553  121784 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:55:13.485648  121784 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:55:13.485753  121784 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:55:13.485849  121784 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:55:13.485919  121784 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:55:13.486007  121784 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:55:13.486232  121784 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	I1210 00:55:13.486323  121784 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:55:13.486537  121784 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	I1210 00:55:13.486642  121784 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:55:13.486740  121784 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:55:13.486805  121784 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:55:13.486881  121784 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:55:13.486967  121784 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:55:13.487047  121784 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:55:13.487156  121784 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:55:13.487242  121784 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:55:13.487379  121784 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:55:13.487520  121784 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:55:13.487582  121784 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:55:13.487674  121784 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:55:13.489301  121784 out.go:235]   - Booting up control plane ...
	I1210 00:55:13.489411  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:55:13.489552  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:55:13.489656  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:55:13.489766  121784 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:55:13.489993  121784 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:55:13.490060  121784 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:55:13.490149  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:55:13.490364  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:55:13.490464  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:55:13.490719  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:55:13.490785  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:55:13.490959  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:55:13.491021  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:55:13.491228  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:55:13.491306  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:55:13.491476  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:55:13.491481  121784 kubeadm.go:310] 
	I1210 00:55:13.491515  121784 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:55:13.491563  121784 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:55:13.491581  121784 kubeadm.go:310] 
	I1210 00:55:13.491628  121784 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:55:13.491663  121784 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:55:13.491789  121784 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:55:13.491801  121784 kubeadm.go:310] 
	I1210 00:55:13.491935  121784 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:55:13.491965  121784 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:55:13.492015  121784 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:55:13.492037  121784 kubeadm.go:310] 
	I1210 00:55:13.492202  121784 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:55:13.492309  121784 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:55:13.492321  121784 kubeadm.go:310] 
	I1210 00:55:13.492451  121784 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:55:13.492559  121784 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:55:13.492676  121784 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:55:13.492779  121784 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:55:13.492839  121784 kubeadm.go:310] 
	W1210 00:55:13.492932  121784 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-481624 localhost] and IPs [192.168.50.207 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
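The repeated kubelet-check failures in the transcript above are kubeadm polling the kubelet's local healthz endpoint and getting connection refused. A minimal sketch of the same probe (URL from the log; the retry count and interval are assumptions), useful for watching the endpoint while debugging with systemctl status kubelet and journalctl -xeu kubelet:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Same endpoint kubeadm's kubelet-check queries.
	const url = "http://localhost:10248/healthz"
	client := &http.Client{Timeout: 2 * time.Second}

	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Typically "connection refused" while the kubelet is down, as in the log.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(5 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("attempt %d: %s %s\n", attempt, resp.Status, body)
		return
	}
	fmt.Println("kubelet healthz never came up; check 'journalctl -xeu kubelet'")
}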
	
	I1210 00:55:13.492988  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 00:55:14.346353  121784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:55:14.359734  121784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:55:14.368619  121784 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:55:14.368639  121784 kubeadm.go:157] found existing configuration files:
	
	I1210 00:55:14.368690  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:55:14.377102  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:55:14.377154  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:55:14.385631  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:55:14.394104  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:55:14.394159  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:55:14.402959  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:55:14.411234  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:55:14.411286  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:55:14.420008  121784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:55:14.429313  121784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:55:14.429375  121784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:55:14.438621  121784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:55:14.641496  121784 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 00:57:10.347594  121784 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 00:57:10.347700  121784 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 00:57:10.349206  121784 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:57:10.349319  121784 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:57:10.349458  121784 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:57:10.349583  121784 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:57:10.349725  121784 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:57:10.349843  121784 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:57:10.351749  121784 out.go:235]   - Generating certificates and keys ...
	I1210 00:57:10.351861  121784 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:57:10.351945  121784 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:57:10.352069  121784 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 00:57:10.352150  121784 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 00:57:10.352247  121784 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 00:57:10.352341  121784 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 00:57:10.352436  121784 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 00:57:10.352526  121784 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 00:57:10.352623  121784 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 00:57:10.352752  121784 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 00:57:10.352816  121784 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 00:57:10.352894  121784 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:57:10.352969  121784 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:57:10.353047  121784 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:57:10.353137  121784 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:57:10.353215  121784 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:57:10.353357  121784 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:57:10.353471  121784 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:57:10.353528  121784 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:57:10.353618  121784 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:57:10.355745  121784 out.go:235]   - Booting up control plane ...
	I1210 00:57:10.355827  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:57:10.355924  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:57:10.356026  121784 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:57:10.356145  121784 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:57:10.356278  121784 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:57:10.356334  121784 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:57:10.356410  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:57:10.356624  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:57:10.356693  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:57:10.356869  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:57:10.356936  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:57:10.357165  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:57:10.357245  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:57:10.357472  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:57:10.357576  121784 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:57:10.357829  121784 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:57:10.357839  121784 kubeadm.go:310] 
	I1210 00:57:10.357876  121784 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 00:57:10.357912  121784 kubeadm.go:310] 		timed out waiting for the condition
	I1210 00:57:10.357919  121784 kubeadm.go:310] 
	I1210 00:57:10.357947  121784 kubeadm.go:310] 	This error is likely caused by:
	I1210 00:57:10.357984  121784 kubeadm.go:310] 		- The kubelet is not running
	I1210 00:57:10.358108  121784 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 00:57:10.358128  121784 kubeadm.go:310] 
	I1210 00:57:10.358278  121784 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 00:57:10.358312  121784 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 00:57:10.358341  121784 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 00:57:10.358347  121784 kubeadm.go:310] 
	I1210 00:57:10.358442  121784 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 00:57:10.358529  121784 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 00:57:10.358537  121784 kubeadm.go:310] 
	I1210 00:57:10.358668  121784 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 00:57:10.358744  121784 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 00:57:10.358815  121784 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 00:57:10.358881  121784 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 00:57:10.358925  121784 kubeadm.go:310] 
	I1210 00:57:10.358951  121784 kubeadm.go:394] duration metric: took 3m54.728630682s to StartCluster
	I1210 00:57:10.359011  121784 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 00:57:10.359065  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 00:57:10.401190  121784 cri.go:89] found id: ""
	I1210 00:57:10.401213  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.401221  121784 logs.go:284] No container was found matching "kube-apiserver"
	I1210 00:57:10.401226  121784 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 00:57:10.401298  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 00:57:10.436054  121784 cri.go:89] found id: ""
	I1210 00:57:10.436080  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.436088  121784 logs.go:284] No container was found matching "etcd"
	I1210 00:57:10.436094  121784 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 00:57:10.436150  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 00:57:10.470300  121784 cri.go:89] found id: ""
	I1210 00:57:10.470331  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.470341  121784 logs.go:284] No container was found matching "coredns"
	I1210 00:57:10.470349  121784 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 00:57:10.470415  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 00:57:10.502105  121784 cri.go:89] found id: ""
	I1210 00:57:10.502139  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.502160  121784 logs.go:284] No container was found matching "kube-scheduler"
	I1210 00:57:10.502170  121784 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 00:57:10.502242  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 00:57:10.534998  121784 cri.go:89] found id: ""
	I1210 00:57:10.535038  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.535050  121784 logs.go:284] No container was found matching "kube-proxy"
	I1210 00:57:10.535062  121784 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 00:57:10.535150  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 00:57:10.569799  121784 cri.go:89] found id: ""
	I1210 00:57:10.569832  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.569844  121784 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 00:57:10.569853  121784 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 00:57:10.569920  121784 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 00:57:10.601230  121784 cri.go:89] found id: ""
	I1210 00:57:10.601269  121784 logs.go:282] 0 containers: []
	W1210 00:57:10.601282  121784 logs.go:284] No container was found matching "kindnet"
	I1210 00:57:10.601296  121784 logs.go:123] Gathering logs for dmesg ...
	I1210 00:57:10.601312  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 00:57:10.613572  121784 logs.go:123] Gathering logs for describe nodes ...
	I1210 00:57:10.613600  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 00:57:10.722307  121784 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 00:57:10.722336  121784 logs.go:123] Gathering logs for CRI-O ...
	I1210 00:57:10.722364  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 00:57:10.830190  121784 logs.go:123] Gathering logs for container status ...
	I1210 00:57:10.830226  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 00:57:10.865488  121784 logs.go:123] Gathering logs for kubelet ...
	I1210 00:57:10.865527  121784 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1210 00:57:10.917945  121784 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 00:57:10.918004  121784 out.go:270] * 
	* 
	W1210 00:57:10.918082  121784 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:57:10.918105  121784 out.go:270] * 
	* 
	W1210 00:57:10.919043  121784 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 00:57:10.922198  121784 out.go:201] 
	W1210 00:57:10.923419  121784 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 00:57:10.923456  121784 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 00:57:10.923484  121784 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 00:57:10.924882  121784 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-481624
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-481624: (1.321920677s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-481624 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-481624 status --format={{.Host}}: exit status 7 (68.241273ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.875996054s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-481624 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.403094ms)

-- stdout --
	* [kubernetes-upgrade-481624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-481624
	    minikube start -p kubernetes-upgrade-481624 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4816242 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-481624 --kubernetes-version=v1.31.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-481624 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.886733713s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-10 00:58:46.299140675 +0000 UTC m=+4519.378778820
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-481624 -n kubernetes-upgrade-481624
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-481624 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-481624 logs -n 25: (1.674702922s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p NoKubernetes-971901                | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p NoKubernetes-971901                | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-988830 stop           | minikube                  | jenkins | v1.26.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| delete  | -p running-upgrade-993049             | running-upgrade-993049    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p stopped-upgrade-988830             | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:56 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-190222 --memory=2048         | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-971901 sudo           | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-971901                | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p cert-expiration-290541             | cert-expiration-290541    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-988830             | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:56 UTC |
	| start   | -p force-systemd-flag-887293          | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-481624          | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p kubernetes-upgrade-481624          | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-190222                       | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:58 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-887293 ssh cat     | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-887293          | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p cert-options-086522                | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:58 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624          | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624          | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:58 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-190222                       | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| ssh     | cert-options-086522 ssh               | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-086522 -- sudo        | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-094470             | old-k8s-version-094470    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| delete  | -p cert-options-086522                | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                  | no-preload-584179         | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:58:27
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:58:27.235012  129779 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:58:27.235265  129779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:58:27.235275  129779 out.go:358] Setting ErrFile to fd 2...
	I1210 00:58:27.235279  129779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:58:27.235454  129779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:58:27.235965  129779 out.go:352] Setting JSON to false
	I1210 00:58:27.236859  129779 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9658,"bootTime":1733782649,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:58:27.236921  129779 start.go:139] virtualization: kvm guest
	I1210 00:58:27.238956  129779 out.go:177] * [no-preload-584179] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:58:27.240646  129779 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:58:27.240723  129779 notify.go:220] Checking for updates...
	I1210 00:58:27.244088  129779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:58:27.245541  129779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:58:27.246780  129779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:27.247949  129779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:58:27.249238  129779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:58:27.250834  129779 config.go:182] Loaded profile config "cert-expiration-290541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:27.250932  129779 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:27.251022  129779 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 00:58:27.251102  129779 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:58:27.991981  129779 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:58:27.993119  129779 start.go:297] selected driver: kvm2
	I1210 00:58:27.993134  129779 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:58:27.993149  129779 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:58:27.993921  129779 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:27.993991  129779 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:58:28.009131  129779 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:58:28.009192  129779 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:58:28.009481  129779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:58:28.009510  129779 cni.go:84] Creating CNI manager for ""
	I1210 00:58:28.009553  129779 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:58:28.009562  129779 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 00:58:28.009612  129779 start.go:340] cluster config:
	{Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:58:28.009726  129779 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.011565  129779 out.go:177] * Starting "no-preload-584179" primary control-plane node in "no-preload-584179" cluster
	I1210 00:58:25.956095  129622 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:58:25.956243  129622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:25.956282  129622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:25.970268  129622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I1210 00:58:25.970750  129622 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:25.971315  129622 main.go:141] libmachine: Using API Version  1
	I1210 00:58:25.971341  129622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:25.971676  129622 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:25.971897  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 00:58:25.972057  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:25.972222  129622 start.go:159] libmachine.API.Create for "old-k8s-version-094470" (driver="kvm2")
	I1210 00:58:25.972255  129622 client.go:168] LocalClient.Create starting
	I1210 00:58:25.972280  129622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:58:25.972306  129622 main.go:141] libmachine: Decoding PEM data...
	I1210 00:58:25.972322  129622 main.go:141] libmachine: Parsing certificate...
	I1210 00:58:25.972375  129622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:58:25.972393  129622 main.go:141] libmachine: Decoding PEM data...
	I1210 00:58:25.972405  129622 main.go:141] libmachine: Parsing certificate...
	I1210 00:58:25.972425  129622 main.go:141] libmachine: Running pre-create checks...
	I1210 00:58:25.972434  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .PreCreateCheck
	I1210 00:58:25.972777  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 00:58:25.973220  129622 main.go:141] libmachine: Creating machine...
	I1210 00:58:25.973238  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .Create
	I1210 00:58:25.973385  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating KVM machine...
	I1210 00:58:25.974732  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found existing default KVM network
	I1210 00:58:25.976120  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.975963  129664 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:1c:a4} reservation:<nil>}
	I1210 00:58:25.977051  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.976915  129664 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:50:21} reservation:<nil>}
	I1210 00:58:25.978304  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.978225  129664 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380740}
	I1210 00:58:25.978339  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | created network xml: 
	I1210 00:58:25.978352  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | <network>
	I1210 00:58:25.978364  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <name>mk-old-k8s-version-094470</name>
	I1210 00:58:25.978378  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <dns enable='no'/>
	I1210 00:58:25.978393  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   
	I1210 00:58:25.978405  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1210 00:58:25.978427  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |     <dhcp>
	I1210 00:58:25.978439  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1210 00:58:25.978449  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |     </dhcp>
	I1210 00:58:25.978479  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   </ip>
	I1210 00:58:25.978502  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   
	I1210 00:58:25.978509  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | </network>
	I1210 00:58:25.978519  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | 
	I1210 00:58:25.983410  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | trying to create private KVM network mk-old-k8s-version-094470 192.168.61.0/24...
	I1210 00:58:26.063722  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | private KVM network mk-old-k8s-version-094470 192.168.61.0/24 created
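	The lines above show the kvm2 driver generating a libvirt <network> definition and creating the private network mk-old-k8s-version-094470 on 192.168.61.0/24. As a rough illustration only (these commands are not from the captured run), such a network can be inspected from the host with the virsh client, assuming it targets the same qemu:///system URI:
	    # list all libvirt networks the system daemon knows about
	    virsh --connect qemu:///system net-list --all
	    # dump the XML actually stored for the minikube-created network
	    virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-094470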
	I1210 00:58:26.063761  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.063695  129664 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:26.063777  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 ...
	I1210 00:58:26.063800  129622 main.go:141] libmachine: (old-k8s-version-094470) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:58:26.063816  129622 main.go:141] libmachine: (old-k8s-version-094470) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:58:26.347854  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.347709  129664 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa...
	I1210 00:58:26.619291  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.619164  129664 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/old-k8s-version-094470.rawdisk...
	I1210 00:58:26.619319  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Writing magic tar header
	I1210 00:58:26.619346  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Writing SSH key tar header
	I1210 00:58:26.619432  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.619368  129664 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 ...
	I1210 00:58:26.619674  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470
	I1210 00:58:26.619718  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 (perms=drwx------)
	I1210 00:58:26.619742  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:58:26.619755  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:58:26.619780  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:58:26.619790  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:26.619800  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:58:26.619829  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:58:26.619841  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:58:26.619851  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:58:26.619859  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:58:26.619867  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:58:26.619875  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home
	I1210 00:58:26.619882  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 00:58:26.619887  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Skipping /home - not owner
	I1210 00:58:27.036255  129622 main.go:141] libmachine: (old-k8s-version-094470) define libvirt domain using xml: 
	I1210 00:58:27.036287  129622 main.go:141] libmachine: (old-k8s-version-094470) <domain type='kvm'>
	I1210 00:58:27.036326  129622 main.go:141] libmachine: (old-k8s-version-094470)   <name>old-k8s-version-094470</name>
	I1210 00:58:27.036348  129622 main.go:141] libmachine: (old-k8s-version-094470)   <memory unit='MiB'>2200</memory>
	I1210 00:58:27.036362  129622 main.go:141] libmachine: (old-k8s-version-094470)   <vcpu>2</vcpu>
	I1210 00:58:27.036372  129622 main.go:141] libmachine: (old-k8s-version-094470)   <features>
	I1210 00:58:27.036381  129622 main.go:141] libmachine: (old-k8s-version-094470)     <acpi/>
	I1210 00:58:27.036396  129622 main.go:141] libmachine: (old-k8s-version-094470)     <apic/>
	I1210 00:58:27.036404  129622 main.go:141] libmachine: (old-k8s-version-094470)     <pae/>
	I1210 00:58:27.036415  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036437  129622 main.go:141] libmachine: (old-k8s-version-094470)   </features>
	I1210 00:58:27.036465  129622 main.go:141] libmachine: (old-k8s-version-094470)   <cpu mode='host-passthrough'>
	I1210 00:58:27.036477  129622 main.go:141] libmachine: (old-k8s-version-094470)   
	I1210 00:58:27.036483  129622 main.go:141] libmachine: (old-k8s-version-094470)   </cpu>
	I1210 00:58:27.036500  129622 main.go:141] libmachine: (old-k8s-version-094470)   <os>
	I1210 00:58:27.036514  129622 main.go:141] libmachine: (old-k8s-version-094470)     <type>hvm</type>
	I1210 00:58:27.036524  129622 main.go:141] libmachine: (old-k8s-version-094470)     <boot dev='cdrom'/>
	I1210 00:58:27.036531  129622 main.go:141] libmachine: (old-k8s-version-094470)     <boot dev='hd'/>
	I1210 00:58:27.036540  129622 main.go:141] libmachine: (old-k8s-version-094470)     <bootmenu enable='no'/>
	I1210 00:58:27.036546  129622 main.go:141] libmachine: (old-k8s-version-094470)   </os>
	I1210 00:58:27.036555  129622 main.go:141] libmachine: (old-k8s-version-094470)   <devices>
	I1210 00:58:27.036563  129622 main.go:141] libmachine: (old-k8s-version-094470)     <disk type='file' device='cdrom'>
	I1210 00:58:27.036577  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/boot2docker.iso'/>
	I1210 00:58:27.036593  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target dev='hdc' bus='scsi'/>
	I1210 00:58:27.036603  129622 main.go:141] libmachine: (old-k8s-version-094470)       <readonly/>
	I1210 00:58:27.036609  129622 main.go:141] libmachine: (old-k8s-version-094470)     </disk>
	I1210 00:58:27.036618  129622 main.go:141] libmachine: (old-k8s-version-094470)     <disk type='file' device='disk'>
	I1210 00:58:27.036628  129622 main.go:141] libmachine: (old-k8s-version-094470)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:58:27.036642  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/old-k8s-version-094470.rawdisk'/>
	I1210 00:58:27.036650  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target dev='hda' bus='virtio'/>
	I1210 00:58:27.036659  129622 main.go:141] libmachine: (old-k8s-version-094470)     </disk>
	I1210 00:58:27.036766  129622 main.go:141] libmachine: (old-k8s-version-094470)     <interface type='network'>
	I1210 00:58:27.036777  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source network='mk-old-k8s-version-094470'/>
	I1210 00:58:27.036784  129622 main.go:141] libmachine: (old-k8s-version-094470)       <model type='virtio'/>
	I1210 00:58:27.036791  129622 main.go:141] libmachine: (old-k8s-version-094470)     </interface>
	I1210 00:58:27.036799  129622 main.go:141] libmachine: (old-k8s-version-094470)     <interface type='network'>
	I1210 00:58:27.036807  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source network='default'/>
	I1210 00:58:27.036824  129622 main.go:141] libmachine: (old-k8s-version-094470)       <model type='virtio'/>
	I1210 00:58:27.036844  129622 main.go:141] libmachine: (old-k8s-version-094470)     </interface>
	I1210 00:58:27.036860  129622 main.go:141] libmachine: (old-k8s-version-094470)     <serial type='pty'>
	I1210 00:58:27.036869  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target port='0'/>
	I1210 00:58:27.036874  129622 main.go:141] libmachine: (old-k8s-version-094470)     </serial>
	I1210 00:58:27.036880  129622 main.go:141] libmachine: (old-k8s-version-094470)     <console type='pty'>
	I1210 00:58:27.036887  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target type='serial' port='0'/>
	I1210 00:58:27.036896  129622 main.go:141] libmachine: (old-k8s-version-094470)     </console>
	I1210 00:58:27.036904  129622 main.go:141] libmachine: (old-k8s-version-094470)     <rng model='virtio'>
	I1210 00:58:27.036914  129622 main.go:141] libmachine: (old-k8s-version-094470)       <backend model='random'>/dev/random</backend>
	I1210 00:58:27.036920  129622 main.go:141] libmachine: (old-k8s-version-094470)     </rng>
	I1210 00:58:27.036931  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036938  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036946  129622 main.go:141] libmachine: (old-k8s-version-094470)   </devices>
	I1210 00:58:27.036952  129622 main.go:141] libmachine: (old-k8s-version-094470) </domain>
	I1210 00:58:27.036964  129622 main.go:141] libmachine: (old-k8s-version-094470) 
	I1210 00:58:27.044376  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:69:03:7a in network default
	I1210 00:58:27.045003  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 00:58:27.045028  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:27.045895  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 00:58:27.046196  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 00:58:27.046764  129622 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 00:58:27.047506  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 00:58:28.289873  129622 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 00:58:28.290835  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.291271  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.291298  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.291250  129664 retry.go:31] will retry after 200.837698ms: waiting for machine to come up
	I1210 00:58:28.493600  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.494060  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.494089  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.494018  129664 retry.go:31] will retry after 273.268694ms: waiting for machine to come up
	I1210 00:58:28.768426  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.768967  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.769011  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.768914  129664 retry.go:31] will retry after 332.226861ms: waiting for machine to come up
	I1210 00:58:29.102323  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:29.102785  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:29.102816  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:29.102742  129664 retry.go:31] will retry after 585.665087ms: waiting for machine to come up
	I1210 00:58:29.690126  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:29.690863  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:29.690892  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:29.690830  129664 retry.go:31] will retry after 601.766804ms: waiting for machine to come up
	I1210 00:58:30.294665  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:30.295116  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:30.295138  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:30.295053  129664 retry.go:31] will retry after 765.321784ms: waiting for machine to come up
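	The repeated "unable to find current IP address" retries above are the driver polling for the new domain's DHCP lease on the private network. As an illustration only (not commands from this run), the lease can also be checked directly on the host with virsh, assuming the same qemu:///system connection:
	    # addresses libvirt has recorded for the domain's interfaces (from the DHCP lease)
	    virsh --connect qemu:///system domifaddr old-k8s-version-094470
	    # or list the leases handed out on the minikube-created network itself
	    virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-094470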
	I1210 00:58:28.012896  129779 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:58:28.013032  129779 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 00:58:28.013061  129779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json: {Name:mkb5a193fcc54ebe6174e2bfffc12879c7dfa8fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:28.013173  129779 cache.go:107] acquiring lock: {Name:mkd28ddf8314e56ae6523089d1bca2389b622b83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013243  129779 cache.go:107] acquiring lock: {Name:mk7ea88c9361bef7326f1028281384b4ba9d7b3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013281  129779 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:58:28.013371  129779 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 00:58:28.013361  129779 cache.go:107] acquiring lock: {Name:mk863d3e72ffe215084bd19b49f771593db5757b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013354  129779 cache.go:107] acquiring lock: {Name:mke690478947db0d1ed71353b7e5c60a9a7bde6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013405  129779 cache.go:107] acquiring lock: {Name:mk73364b9836637344227645056c3142a552841e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013396  129779 cache.go:107] acquiring lock: {Name:mkdfdc843ec53657e2a22caa4eecac87f68210db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013376  129779 cache.go:115] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1210 00:58:28.013189  129779 cache.go:107] acquiring lock: {Name:mk419f6f0d15829cbcd5dbab41abb06826a22c92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013541  129779 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 363.006µs
	I1210 00:58:28.013745  129779 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1210 00:58:28.013182  129779 cache.go:107] acquiring lock: {Name:mkd8172d4339adf4d0ea91d6a43891fcca9dd0bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:28.013612  129779 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 00:58:28.013847  129779 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 00:58:28.013655  129779 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 00:58:28.013683  129779 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 00:58:28.013701  129779 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 00:58:28.013724  129779 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 00:58:28.014730  129779 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 00:58:28.014806  129779 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 00:58:28.014822  129779 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 00:58:28.014888  129779 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 00:58:28.014900  129779 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 00:58:28.015137  129779 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 00:58:28.015214  129779 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 00:58:28.166643  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 00:58:28.171475  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 00:58:28.172739  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 00:58:28.177521  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 00:58:28.189218  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1210 00:58:28.190466  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 00:58:28.212768  129779 cache.go:162] opening:  /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 00:58:28.257104  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I1210 00:58:28.257124  129779 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 243.881109ms
	I1210 00:58:28.257133  129779 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I1210 00:58:28.577743  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1210 00:58:28.577773  129779 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2" took 564.595863ms
	I1210 00:58:28.577785  129779 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1210 00:58:29.585253  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1210 00:58:29.585302  129779 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.571944642s
	I1210 00:58:29.585344  129779 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1210 00:58:29.599438  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1210 00:58:29.599465  129779 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2" took 1.586105279s
	I1210 00:58:29.599479  129779 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1210 00:58:29.685646  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1210 00:58:29.685678  129779 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2" took 1.672379805s
	I1210 00:58:29.685693  129779 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1210 00:58:29.695140  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1210 00:58:29.695176  129779 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2" took 1.682011555s
	I1210 00:58:29.695193  129779 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1210 00:58:29.970488  129779 cache.go:157] /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I1210 00:58:29.970515  129779 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 1.957218055s
	I1210 00:58:29.970527  129779 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1210 00:58:29.970543  129779 cache.go:87] Successfully saved all images to host disk.
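	Because no-preload-584179 was started with --preload=false, the step above caches each control-plane image as an individual tarball under the profile's .minikube image cache instead of pulling a single preload archive. A quick sanity check on the host (illustrative only, using the cache path printed in the log) would be:
	    # the per-image tarballs written by the cache step above
	    ls -lh /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/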
	I1210 00:58:31.062519  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:31.062916  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:31.062947  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:31.062859  129664 retry.go:31] will retry after 887.24548ms: waiting for machine to come up
	I1210 00:58:31.951885  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:31.952435  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:31.952468  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:31.952380  129664 retry.go:31] will retry after 1.396905116s: waiting for machine to come up
	I1210 00:58:33.350891  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:33.351284  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:33.351332  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:33.351234  129664 retry.go:31] will retry after 1.265722199s: waiting for machine to come up
	I1210 00:58:34.618695  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:34.619106  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:34.619134  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:34.619059  129664 retry.go:31] will retry after 1.981614225s: waiting for machine to come up
	I1210 00:58:36.602233  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:36.602770  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:36.602795  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:36.602713  129664 retry.go:31] will retry after 2.224825931s: waiting for machine to come up
	I1210 00:58:38.829071  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:38.829597  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:38.829629  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:38.829534  129664 retry.go:31] will retry after 2.685492556s: waiting for machine to come up
	I1210 00:58:39.777547  129030 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593 654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108 2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58 8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e 83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7 6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0 f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2 2abc651c343f895838aebac40b514ab3b17d13a190573fbeab34401a11e539f6 67f30242e4410647d0c4c9109c5c11df0de55bd5218be940d3fdca2774c6b209 79833e2212c0cca57ef56ddba2c82934aaee3ce5951e9ac2eaa5ad408325d600 9bfb5f10d3ba7bb54be7a240675dd3b0d8323055eaa98b6a2edb9cfac8d7c9e1 d12fa92f275ea447bc14bb042314691708d44814d33249eccdc3022928be1701 e0edefcccd7a911062ce20a2b81bd11ba558456fa2a3858d3fc2e8cd0ca1ed2e: (20.718842179s)
	W1210 00:58:39.777652  129030 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593 654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108 2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58 8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e 83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7 6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0 f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2 2abc651c343f895838aebac40b514ab3b17d13a190573fbeab34401a11e539f6 67f30242e4410647d0c4c9109c5c11df0de55bd5218be940d3fdca2774c6b209 79833e2212c0cca57ef56ddba2c82934aaee3ce5951e9ac2eaa5ad408325d600 9bfb5f10d3ba7bb54be7a240675dd3b0d8323055eaa98b6a2edb9cfac8d7c9e1 d12fa92f275ea447bc14bb042314691708d44814d33249eccdc3022928be1701 e0edefcccd7a911062ce20a2b81bd11ba558456fa2a3858d3fc2e8cd0ca1ed2e: Process exited with status 1
	stdout:
	f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593
	654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108
	2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58
	8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e
	83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7
	6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd
	ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0
	
	stderr:
	E1210 00:58:39.760260    3174 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2\": container with ID starting with f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2 not found: ID does not exist" containerID="f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2"
	time="2024-12-10T00:58:39Z" level=fatal msg="stopping the container \"f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2\": rpc error: code = NotFound desc = could not find container \"f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2\": container with ID starting with f5d9c7b11733c24f10eadeb76a65a2c45e30b605c0565d474ae42aafb5b813e2 not found: ID does not exist"
	I1210 00:58:39.777734  129030 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 00:58:39.821924  129030 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:58:39.832142  129030 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Dec 10 00:57 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Dec 10 00:57 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Dec 10 00:57 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Dec 10 00:57 /etc/kubernetes/scheduler.conf
	
	I1210 00:58:39.832278  129030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:58:39.840680  129030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:58:39.849259  129030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:58:39.857590  129030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:58:39.857644  129030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:58:39.866379  129030 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:58:39.874589  129030 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:58:39.874636  129030 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:58:39.883319  129030 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:58:39.891589  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:39.942646  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:40.972964  129030 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.030270044s)
	I1210 00:58:40.973009  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:41.211852  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:41.282787  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:41.365885  129030 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:58:41.366065  129030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:58:41.866073  129030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:58:41.895336  129030 api_server.go:72] duration metric: took 529.449104ms to wait for apiserver process to appear ...
	I1210 00:58:41.895370  129030 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:58:41.895397  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:41.895905  129030 api_server.go:269] stopped: https://192.168.50.207:8443/healthz: Get "https://192.168.50.207:8443/healthz": dial tcp 192.168.50.207:8443: connect: connection refused
	I1210 00:58:42.395931  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:43.531078  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:58:43.531115  129030 api_server.go:103] status: https://192.168.50.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:58:43.531132  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:43.608471  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 00:58:43.608500  129030 api_server.go:103] status: https://192.168.50.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 00:58:43.895754  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:43.901984  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:58:43.902007  129030 api_server.go:103] status: https://192.168.50.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:58:44.396121  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:44.401512  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 00:58:44.401542  129030 api_server.go:103] status: https://192.168.50.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 00:58:44.896175  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:44.902289  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 200:
	ok
	I1210 00:58:44.909457  129030 api_server.go:141] control plane version: v1.31.2
	I1210 00:58:44.909480  129030 api_server.go:131] duration metric: took 3.01410339s to wait for apiserver health ...
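	The 403 and 500 responses during the wait above are expected while the apiserver restarts: anonymous requests to /healthz are rejected until the rbac/bootstrap-roles poststarthook installs the default roles, and the 500 bodies show exactly which poststarthook checks are still pending. Once credentials are available, the same verbose breakdown can be requested manually; the commands below are illustrative only, and the context name is assumed to match the profile name:
	    # authenticated verbose health check, printing the same [+]/[-] list seen above
	    kubectl --context kubernetes-upgrade-481624 get --raw '/healthz?verbose'
	    # unauthenticated probe straight at the endpoint; expect 403 until bootstrap roles exist
	    curl -k https://192.168.50.207:8443/healthz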
	I1210 00:58:44.909490  129030 cni.go:84] Creating CNI manager for ""
	I1210 00:58:44.909496  129030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:58:44.911314  129030 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 00:58:44.912687  129030 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 00:58:44.924648  129030 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 00:58:44.946197  129030 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:58:44.946299  129030 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1210 00:58:44.946323  129030 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1210 00:58:44.957585  129030 system_pods.go:59] 8 kube-system pods found
	I1210 00:58:44.957613  129030 system_pods.go:61] "coredns-7c65d6cfc9-tql8s" [a05ce2a0-8072-4e8e-8986-3645fcccf3d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:58:44.957619  129030 system_pods.go:61] "coredns-7c65d6cfc9-xpv54" [adf10e6a-56ac-4968-98cb-bfefe1818033] Running
	I1210 00:58:44.957625  129030 system_pods.go:61] "etcd-kubernetes-upgrade-481624" [fb6323a8-db00-4f59-a6f3-f005ad663000] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:58:44.957631  129030 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-481624" [41e94ed0-7986-4705-8112-f38fa86a1cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:58:44.957642  129030 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-481624" [dde273cf-416c-4812-b9cf-01267dd69fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:58:44.957649  129030 system_pods.go:61] "kube-proxy-gsztm" [3fd7dab5-b234-45bc-88aa-2d4ade84e60e] Running
	I1210 00:58:44.957655  129030 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-481624" [64704c4e-6a9f-4b10-a42a-114612487ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:58:44.957661  129030 system_pods.go:61] "storage-provisioner" [290e4ca8-2a66-4523-8ce9-17a2dc5d6054] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 00:58:44.957670  129030 system_pods.go:74] duration metric: took 11.454924ms to wait for pod list to return data ...
	I1210 00:58:44.957677  129030 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:58:44.961584  129030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:58:44.961609  129030 node_conditions.go:123] node cpu capacity is 2
	I1210 00:58:44.961620  129030 node_conditions.go:105] duration metric: took 3.938248ms to run NodePressure ...
	I1210 00:58:44.961642  129030 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 00:58:45.264371  129030 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 00:58:45.275738  129030 ops.go:34] apiserver oom_adj: -16
	I1210 00:58:45.275759  129030 kubeadm.go:597] duration metric: took 26.371783563s to restartPrimaryControlPlane
	I1210 00:58:45.275771  129030 kubeadm.go:394] duration metric: took 26.602975479s to StartCluster
	I1210 00:58:45.275800  129030 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:45.275887  129030 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:58:45.277207  129030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:45.277434  129030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:58:45.277543  129030 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 00:58:45.277638  129030 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:45.277658  129030 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-481624"
	I1210 00:58:45.277687  129030 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-481624"
	W1210 00:58:45.277697  129030 addons.go:243] addon storage-provisioner should already be in state true
	I1210 00:58:45.277691  129030 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-481624"
	I1210 00:58:45.277724  129030 host.go:66] Checking if "kubernetes-upgrade-481624" exists ...
	I1210 00:58:45.277725  129030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-481624"
	I1210 00:58:45.278005  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:45.278044  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:45.278154  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:45.278204  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:45.278820  129030 out.go:177] * Verifying Kubernetes components...
	I1210 00:58:45.279996  129030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:58:45.293088  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40389
	I1210 00:58:45.293360  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I1210 00:58:45.293502  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:45.293726  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:45.293969  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:58:45.293994  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:45.294197  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:58:45.294221  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:45.294284  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:45.294479  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetState
	I1210 00:58:45.294527  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:45.295061  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:45.295106  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:45.297099  129030 kapi.go:59] client config for kubernetes-upgrade-481624: &rest.Config{Host:"https://192.168.50.207:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.crt", KeyFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/client.key", CAFile:"/home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil
), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1210 00:58:45.297491  129030 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-481624"
	W1210 00:58:45.297513  129030 addons.go:243] addon default-storageclass should already be in state true
	I1210 00:58:45.297562  129030 host.go:66] Checking if "kubernetes-upgrade-481624" exists ...
	I1210 00:58:45.297942  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:45.297990  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:45.310339  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I1210 00:58:45.310781  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:45.311241  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:58:45.311264  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:45.311601  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:45.311826  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetState
	I1210 00:58:45.311855  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I1210 00:58:45.312189  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:45.312853  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:58:45.312879  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:45.313196  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:45.313741  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:45.313791  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:45.313869  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:58:45.315822  129030 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:58:41.516677  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:41.517114  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:41.517142  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:41.517068  129664 retry.go:31] will retry after 4.456616812s: waiting for machine to come up
	I1210 00:58:45.317169  129030 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:58:45.317190  129030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 00:58:45.317205  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:58:45.320748  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:58:45.321227  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:57:28 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:58:45.321253  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:58:45.321536  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:58:45.321699  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:58:45.321835  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:58:45.321975  129030 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:58:45.329377  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
	I1210 00:58:45.329863  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:45.330312  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:58:45.330332  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:45.330676  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:45.330858  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetState
	I1210 00:58:45.332289  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:58:45.332518  129030 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 00:58:45.332540  129030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 00:58:45.332559  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHHostname
	I1210 00:58:45.335358  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:58:45.335724  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:36:d7", ip: ""} in network mk-kubernetes-upgrade-481624: {Iface:virbr2 ExpiryTime:2024-12-10 01:57:28 +0000 UTC Type:0 Mac:52:54:00:76:36:d7 Iaid: IPaddr:192.168.50.207 Prefix:24 Hostname:kubernetes-upgrade-481624 Clientid:01:52:54:00:76:36:d7}
	I1210 00:58:45.335753  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | domain kubernetes-upgrade-481624 has defined IP address 192.168.50.207 and MAC address 52:54:00:76:36:d7 in network mk-kubernetes-upgrade-481624
	I1210 00:58:45.335891  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHPort
	I1210 00:58:45.336042  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHKeyPath
	I1210 00:58:45.336168  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .GetSSHUsername
	I1210 00:58:45.336274  129030 sshutil.go:53] new ssh client: &{IP:192.168.50.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/kubernetes-upgrade-481624/id_rsa Username:docker}
	I1210 00:58:45.456343  129030 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:58:45.472640  129030 api_server.go:52] waiting for apiserver process to appear ...
	I1210 00:58:45.472718  129030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:58:45.487105  129030 api_server.go:72] duration metric: took 209.641017ms to wait for apiserver process to appear ...
	I1210 00:58:45.487131  129030 api_server.go:88] waiting for apiserver healthz status ...
	I1210 00:58:45.487154  129030 api_server.go:253] Checking apiserver healthz at https://192.168.50.207:8443/healthz ...
	I1210 00:58:45.491120  129030 api_server.go:279] https://192.168.50.207:8443/healthz returned 200:
	ok
	I1210 00:58:45.491980  129030 api_server.go:141] control plane version: v1.31.2
	I1210 00:58:45.492000  129030 api_server.go:131] duration metric: took 4.861077ms to wait for apiserver health ...
	I1210 00:58:45.492008  129030 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 00:58:45.498431  129030 system_pods.go:59] 8 kube-system pods found
	I1210 00:58:45.498455  129030 system_pods.go:61] "coredns-7c65d6cfc9-tql8s" [a05ce2a0-8072-4e8e-8986-3645fcccf3d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 00:58:45.498461  129030 system_pods.go:61] "coredns-7c65d6cfc9-xpv54" [adf10e6a-56ac-4968-98cb-bfefe1818033] Running
	I1210 00:58:45.498468  129030 system_pods.go:61] "etcd-kubernetes-upgrade-481624" [fb6323a8-db00-4f59-a6f3-f005ad663000] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 00:58:45.498474  129030 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-481624" [41e94ed0-7986-4705-8112-f38fa86a1cb0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 00:58:45.498484  129030 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-481624" [dde273cf-416c-4812-b9cf-01267dd69fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 00:58:45.498492  129030 system_pods.go:61] "kube-proxy-gsztm" [3fd7dab5-b234-45bc-88aa-2d4ade84e60e] Running
	I1210 00:58:45.498497  129030 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-481624" [64704c4e-6a9f-4b10-a42a-114612487ee9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 00:58:45.498502  129030 system_pods.go:61] "storage-provisioner" [290e4ca8-2a66-4523-8ce9-17a2dc5d6054] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 00:58:45.498510  129030 system_pods.go:74] duration metric: took 6.497487ms to wait for pod list to return data ...
	I1210 00:58:45.498522  129030 kubeadm.go:582] duration metric: took 221.063027ms to wait for: map[apiserver:true system_pods:true]
	I1210 00:58:45.498535  129030 node_conditions.go:102] verifying NodePressure condition ...
	I1210 00:58:45.500846  129030 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 00:58:45.500863  129030 node_conditions.go:123] node cpu capacity is 2
	I1210 00:58:45.500873  129030 node_conditions.go:105] duration metric: took 2.333335ms to run NodePressure ...
	I1210 00:58:45.500883  129030 start.go:241] waiting for startup goroutines ...
	I1210 00:58:45.536799  129030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 00:58:45.549180  129030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 00:58:46.223116  129030 main.go:141] libmachine: Making call to close driver server
	I1210 00:58:46.223142  129030 main.go:141] libmachine: Making call to close driver server
	I1210 00:58:46.223159  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Close
	I1210 00:58:46.223147  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Close
	I1210 00:58:46.223494  129030 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:58:46.223508  129030 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:58:46.223517  129030 main.go:141] libmachine: Making call to close driver server
	I1210 00:58:46.223524  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Close
	I1210 00:58:46.223579  129030 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:58:46.223599  129030 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:58:46.223609  129030 main.go:141] libmachine: Making call to close driver server
	I1210 00:58:46.223618  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Close
	I1210 00:58:46.223723  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Closing plugin on server side
	I1210 00:58:46.223753  129030 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:58:46.223765  129030 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:58:46.223921  129030 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:58:46.223942  129030 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:58:46.229529  129030 main.go:141] libmachine: Making call to close driver server
	I1210 00:58:46.229554  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .Close
	I1210 00:58:46.229811  129030 main.go:141] libmachine: Successfully made call to close driver server
	I1210 00:58:46.229830  129030 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 00:58:46.229845  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) DBG | Closing plugin on server side
	I1210 00:58:46.231562  129030 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1210 00:58:46.232772  129030 addons.go:510] duration metric: took 955.239203ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1210 00:58:46.232807  129030 start.go:246] waiting for cluster config update ...
	I1210 00:58:46.232822  129030 start.go:255] writing updated cluster config ...
	I1210 00:58:46.233110  129030 ssh_runner.go:195] Run: rm -f paused
	I1210 00:58:46.283595  129030 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 00:58:46.285284  129030 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-481624" cluster and "default" namespace by default
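	(Editor's note: the apiserver readiness probe recorded above — api_server.go repeatedly hitting https://192.168.50.207:8443/healthz, tolerating 500 responses while post-start hooks such as rbac/bootstrap-roles finish, and stopping at the first 200 — is essentially a timed poll loop. The following is a minimal, hypothetical Go sketch of that pattern, not minikube's actual implementation; the endpoint, timeout, and TLS handling are assumptions for illustration only.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// A 500 response (the apiserver's "healthz check failed" checklist) is treated
	// as "not ready yet", mirroring the retries seen in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for illustration only: skip certificate verification.
			// minikube instead trusts the cluster CA it generated.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Hypothetical endpoint matching the one in the log; adjust as needed.
		if err := waitForHealthz("https://192.168.50.207:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}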
	
	
	==> CRI-O <==
	Dec 10 00:58:46 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:46.964858742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733792326964835264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5922634-f0bb-414d-b5b4-86e2269de523 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:46 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:46.965520021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=622ec222-a6fc-4c37-8844-ad8fcb28bb08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:46 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:46.965648914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=622ec222-a6fc-4c37-8844-ad8fcb28bb08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:46 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:46.966047728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea77d657889a3172b6e82da0d1aaefa2dfd7e0caff990e5b46233d4db738c34,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733792324608571680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b7e7d4b34848fc0cc7e6961ad9bb9d61581c1f80ea4e0488714f09d77b088,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792324619513293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b9d9ccd01f3b70fb5d600a4886a8f06ec6c2fd9f8fa07a099741cc68bd9c43,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792321783383549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648cb25f203e5a7263dbe32a33f9254049a1af1a6ae3e46a65577ab9b886550,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792321771969295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2a69a56fc215e5719d04b09be0607954e62533f823a302231928f233277520,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792318042353080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3
a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ece1b985e6bdaa4c42c9c6d239b84ce0aa6208adfb82891f660c1f677eb7e122,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792317046169918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7022986108c81caea984393f1c55b60f2cc4fc0a48523cb87edb18422470c658,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792313045348989,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc
-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792311119339197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5
d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f068da75044877a0e1d82c5c196e45b366bc1f4b6c4d1e316174c10a6435da,PodSandboxId:5980819201b665d23eac8c022fb3231c2fd1b6693cfd4559dde327c3f9102f72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792298888404234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792298819868234,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733792297785855198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733792297686576909,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733792297657658948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733792297768029766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a
96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792297616094377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0,PodSandboxId:074a89aeb9954c443dcb6652b0f34710e8cf6fed8148aa5debdbb99a6c89450f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792277676407823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=622ec222-a6fc-4c37-8844-ad8fcb28bb08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.025866350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e92c2ef-fef5-43f7-9834-9f88678977a9 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.025964757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e92c2ef-fef5-43f7-9834-9f88678977a9 name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.034544062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c89e4d16-3617-4407-8420-f2e1afedaa0b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.035414169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733792327035289456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c89e4d16-3617-4407-8420-f2e1afedaa0b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.040582582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa32e0cd-f11e-4fe4-8410-eee820ca3d0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.040671607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa32e0cd-f11e-4fe4-8410-eee820ca3d0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.040966422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea77d657889a3172b6e82da0d1aaefa2dfd7e0caff990e5b46233d4db738c34,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733792324608571680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b7e7d4b34848fc0cc7e6961ad9bb9d61581c1f80ea4e0488714f09d77b088,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792324619513293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b9d9ccd01f3b70fb5d600a4886a8f06ec6c2fd9f8fa07a099741cc68bd9c43,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792321783383549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648cb25f203e5a7263dbe32a33f9254049a1af1a6ae3e46a65577ab9b886550,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792321771969295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2a69a56fc215e5719d04b09be0607954e62533f823a302231928f233277520,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792318042353080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3
a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ece1b985e6bdaa4c42c9c6d239b84ce0aa6208adfb82891f660c1f677eb7e122,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792317046169918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7022986108c81caea984393f1c55b60f2cc4fc0a48523cb87edb18422470c658,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792313045348989,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc
-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792311119339197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5
d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f068da75044877a0e1d82c5c196e45b366bc1f4b6c4d1e316174c10a6435da,PodSandboxId:5980819201b665d23eac8c022fb3231c2fd1b6693cfd4559dde327c3f9102f72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792298888404234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792298819868234,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733792297785855198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733792297686576909,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733792297657658948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733792297768029766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a
96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792297616094377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0,PodSandboxId:074a89aeb9954c443dcb6652b0f34710e8cf6fed8148aa5debdbb99a6c89450f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792277676407823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa32e0cd-f11e-4fe4-8410-eee820ca3d0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.095257597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4628419-68ce-494e-8f39-3a3e9108473a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.095338626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4628419-68ce-494e-8f39-3a3e9108473a name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.100937055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa27c88a-eba7-4736-9131-e1beec548194 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.101274535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733792327101255002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa27c88a-eba7-4736-9131-e1beec548194 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.102448858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=268de1d7-2962-451c-b93b-4f444b1400aa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.102506922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=268de1d7-2962-451c-b93b-4f444b1400aa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.102908301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea77d657889a3172b6e82da0d1aaefa2dfd7e0caff990e5b46233d4db738c34,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733792324608571680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b7e7d4b34848fc0cc7e6961ad9bb9d61581c1f80ea4e0488714f09d77b088,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792324619513293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b9d9ccd01f3b70fb5d600a4886a8f06ec6c2fd9f8fa07a099741cc68bd9c43,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792321783383549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648cb25f203e5a7263dbe32a33f9254049a1af1a6ae3e46a65577ab9b886550,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792321771969295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2a69a56fc215e5719d04b09be0607954e62533f823a302231928f233277520,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792318042353080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3
a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ece1b985e6bdaa4c42c9c6d239b84ce0aa6208adfb82891f660c1f677eb7e122,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792317046169918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7022986108c81caea984393f1c55b60f2cc4fc0a48523cb87edb18422470c658,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792313045348989,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc
-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792311119339197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5
d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f068da75044877a0e1d82c5c196e45b366bc1f4b6c4d1e316174c10a6435da,PodSandboxId:5980819201b665d23eac8c022fb3231c2fd1b6693cfd4559dde327c3f9102f72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792298888404234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792298819868234,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733792297785855198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733792297686576909,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733792297657658948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733792297768029766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a
96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792297616094377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0,PodSandboxId:074a89aeb9954c443dcb6652b0f34710e8cf6fed8148aa5debdbb99a6c89450f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792277676407823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=268de1d7-2962-451c-b93b-4f444b1400aa name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.156064159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8cb470b9-9786-4586-b7c8-e1b1fee9863f name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.156135430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cb470b9-9786-4586-b7c8-e1b1fee9863f name=/runtime.v1.RuntimeService/Version
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.160014656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=993ccb52-a165-4d34-ab8e-3d1593aaca85 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.160363434Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733792327160342717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=993ccb52-a165-4d34-ab8e-3d1593aaca85 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.160969717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49ea6251-c6a0-468c-a13a-8e669ffa82d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.161022113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49ea6251-c6a0-468c-a13a-8e669ffa82d9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 00:58:47 kubernetes-upgrade-481624 crio[2283]: time="2024-12-10 00:58:47.161633568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dea77d657889a3172b6e82da0d1aaefa2dfd7e0caff990e5b46233d4db738c34,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733792324608571680,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e4b7e7d4b34848fc0cc7e6961ad9bb9d61581c1f80ea4e0488714f09d77b088,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792324619513293,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4b9d9ccd01f3b70fb5d600a4886a8f06ec6c2fd9f8fa07a099741cc68bd9c43,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792321783383549,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f648cb25f203e5a7263dbe32a33f9254049a1af1a6ae3e46a65577ab9b886550,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792321771969295,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e2a69a56fc215e5719d04b09be0607954e62533f823a302231928f233277520,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792318042353080,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3
a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ece1b985e6bdaa4c42c9c6d239b84ce0aa6208adfb82891f660c1f677eb7e122,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792317046169918,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7022986108c81caea984393f1c55b60f2cc4fc0a48523cb87edb18422470c658,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792313045348989,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc
-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1,PodSandboxId:732f18b4bbd6cd0883b1c5fce58325505ca89dc839744b3a2d4fde37fd06acf4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792311119339197,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 290e4ca8-2a66-4523-8ce9-17a2dc5
d6054,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f068da75044877a0e1d82c5c196e45b366bc1f4b6c4d1e316174c10a6435da,PodSandboxId:5980819201b665d23eac8c022fb3231c2fd1b6693cfd4559dde327c3f9102f72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792298888404234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]
string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593,PodSandboxId:f5ef63e3166a88dc86c6f9f1ecd2c960e00b68dc25d59d0be3193e7db8661b00,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792298819868234,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tql8s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a05ce2a0-8072-4e8e-8986-3645fcccf3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108,PodSandboxId:c4b4d3d14c446a347e14c39106e85e83120abf4950229d62b12165274b411eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifie
dImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733792297785855198,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gsztm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3fd7dab5-b234-45bc-88aa-2d4ade84e60e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e,PodSandboxId:ea804a705abe42f9df82471f97bc0478b3b43068201ea9cf9604d15dfc5a0ea4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733792297686576909,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4aefdb145601ba0b3211c1171f264e2,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7,PodSandboxId:d13480feec22024a420a8f7fee8546e8ede7e8cb8c2857f6122a097e46866b6c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733792297657658948,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea796b28e9ebdb099c3e5b5da27161da,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58,PodSandboxId:b904daa0c62b5ef3e1f57705ce2070723dbf38c99ef6094c0c689055fa4b0af5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733792297768029766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b3a04bbc4983d6f0a54f5d771f52f18,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd,PodSandboxId:d4de158c238b6e6ebc72ebfe86a536a27eb366ca00a761dacfc54793d7c94112,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a
96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792297616094377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-481624,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35cf37714b20b74da13dc076e642ca7e,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0,PodSandboxId:074a89aeb9954c443dcb6652b0f34710e8cf6fed8148aa5debdbb99a6c89450f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733792277676407823,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xpv54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf10e6a-56ac-4968-98cb-bfefe1818033,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49ea6251-c6a0-468c-a13a-8e669ffa82d9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e4b7e7d4b348       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   2 seconds ago       Running             coredns                   2                   f5ef63e3166a8       coredns-7c65d6cfc9-tql8s
	dea77d657889a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       2                   732f18b4bbd6c       storage-provisioner
	c4b9d9ccd01f3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   5 seconds ago       Running             kube-scheduler            2                   d13480feec220       kube-scheduler-kubernetes-upgrade-481624
	f648cb25f203e       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   5 seconds ago       Running             kube-apiserver            2                   d4de158c238b6       kube-apiserver-kubernetes-upgrade-481624
	5e2a69a56fc21       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      2                   b904daa0c62b5       etcd-kubernetes-upgrade-481624
	ece1b985e6bda       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   10 seconds ago      Running             kube-controller-manager   2                   ea804a705abe4       kube-controller-manager-kubernetes-upgrade-481624
	7022986108c81       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 seconds ago      Running             kube-proxy                2                   c4b4d3d14c446       kube-proxy-gsztm
	77addb89819de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Exited              storage-provisioner       1                   732f18b4bbd6c       storage-provisioner
	55f068da75044       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Running             coredns                   1                   5980819201b66       coredns-7c65d6cfc9-xpv54
	f02b4cd9a7fcd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   28 seconds ago      Exited              coredns                   1                   f5ef63e3166a8       coredns-7c65d6cfc9-tql8s
	654b3bad85e12       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   29 seconds ago      Exited              kube-proxy                1                   c4b4d3d14c446       kube-proxy-gsztm
	2ed3b02538248       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Exited              etcd                      1                   b904daa0c62b5       etcd-kubernetes-upgrade-481624
	8b9f9d31ee855       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   29 seconds ago      Exited              kube-controller-manager   1                   ea804a705abe4       kube-controller-manager-kubernetes-upgrade-481624
	83146d4b35907       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   29 seconds ago      Exited              kube-scheduler            1                   d13480feec220       kube-scheduler-kubernetes-upgrade-481624
	6d0fbf7a2c21c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   29 seconds ago      Exited              kube-apiserver            1                   d4de158c238b6       kube-apiserver-kubernetes-upgrade-481624
	ae67ba6adfb68       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   49 seconds ago      Exited              coredns                   0                   074a89aeb9954       coredns-7c65d6cfc9-xpv54
	
	
	==> coredns [55f068da75044877a0e1d82c5c196e45b366bc1f4b6c4d1e316174c10a6435da] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[368828305]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (10-Dec-2024 00:58:19.302) (total time: 10002ms):
	Trace[368828305]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:58:29.304)
	Trace[368828305]: [10.002558542s] [10.002558542s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[878330628]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (10-Dec-2024 00:58:19.302) (total time: 10001ms):
	Trace[878330628]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:58:29.304)
	Trace[878330628]: [10.001952572s] [10.001952572s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51820->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1629844948]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (10-Dec-2024 00:58:30.136) (total time: 10540ms):
	Trace[1629844948]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51820->10.96.0.1:443: read: connection reset by peer 10540ms (00:58:40.676)
	Trace[1629844948]: [10.54034983s] [10.54034983s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51820->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[57595942]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (10-Dec-2024 00:58:30.454) (total time: 10221ms):
	Trace[57595942]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51822->10.96.0.1:443: read: connection reset by peer 10221ms (00:58:40.676)
	Trace[57595942]: [10.221589468s] [10.221589468s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51822->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51832->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:51832->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	
	
	==> coredns [6e4b7e7d4b34848fc0cc7e6961ad9bb9d61581c1f80ea4e0488714f09d77b088] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [ae67ba6adfb68372b41990e2afd37a55475cc0fc989c2fe107e7704548a91dc0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-481624
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-481624
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:57:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-481624
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:58:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:58:43 +0000   Tue, 10 Dec 2024 00:57:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:58:43 +0000   Tue, 10 Dec 2024 00:57:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:58:43 +0000   Tue, 10 Dec 2024 00:57:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:58:43 +0000   Tue, 10 Dec 2024 00:57:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.207
	  Hostname:    kubernetes-upgrade-481624
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c806daf6ea8d45e68c74ff50d7edf57f
	  System UUID:                c806daf6-ea8d-45e6-8c74-ff50d7edf57f
	  Boot ID:                    02b40155-0aeb-4cbf-a489-ecea68eaa283
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-tql8s                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     51s
	  kube-system                 coredns-7c65d6cfc9-xpv54                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     51s
	  kube-system                 etcd-kubernetes-upgrade-481624                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         56s
	  kube-system                 kube-apiserver-kubernetes-upgrade-481624             250m (12%)    0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-481624    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-gsztm                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-kubernetes-upgrade-481624             100m (5%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    61s (x8 over 62s)  kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 62s)  kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x8 over 62s)  kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           51s                node-controller  Node kubernetes-upgrade-481624 event: Registered Node kubernetes-upgrade-481624 in Controller
	  Normal  Starting                 6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6s (x8 over 6s)    kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6s (x8 over 6s)    kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6s (x7 over 6s)    kubelet          Node kubernetes-upgrade-481624 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-481624 event: Registered Node kubernetes-upgrade-481624 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.988323] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.070598] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055293] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.197247] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.122004] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.259539] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +3.873098] systemd-fstab-generator[720]: Ignoring "noauto" option for root device
	[  +2.186626] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.058354] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.724680] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.079570] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.001949] kauditd_printk_skb: 66 callbacks suppressed
	[Dec10 00:58] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[  +0.082983] kauditd_printk_skb: 33 callbacks suppressed
	[  +0.053495] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +0.180002] systemd-fstab-generator[2235]: Ignoring "noauto" option for root device
	[  +0.153519] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +0.296606] systemd-fstab-generator[2275]: Ignoring "noauto" option for root device
	[  +1.898571] systemd-fstab-generator[2428]: Ignoring "noauto" option for root device
	[  +2.502822] kauditd_printk_skb: 218 callbacks suppressed
	[ +17.610139] kauditd_printk_skb: 6 callbacks suppressed
	[  +4.012189] systemd-fstab-generator[3599]: Ignoring "noauto" option for root device
	[  +2.981891] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.249441] systemd-fstab-generator[3971]: Ignoring "noauto" option for root device
	
	
	==> etcd [2ed3b025382484b3b1531ada5e9a3bf18b28a37055020bfb7f70d0521c7fec58] <==
	{"level":"info","ts":"2024-12-10T00:58:18.162505Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-12-10T00:58:18.174161Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"be0816d0cb8232ac","local-member-id":"5631c7033d9ffc08","commit-index":397}
	{"level":"info","ts":"2024-12-10T00:58:18.174226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 switched to configuration voters=()"}
	{"level":"info","ts":"2024-12-10T00:58:18.174260Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 became follower at term 2"}
	{"level":"info","ts":"2024-12-10T00:58:18.174275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5631c7033d9ffc08 [peers: [], term: 2, commit: 397, applied: 0, lastindex: 397, lastterm: 2]"}
	{"level":"warn","ts":"2024-12-10T00:58:18.176411Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-12-10T00:58:18.185039Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":386}
	{"level":"info","ts":"2024-12-10T00:58:18.193730Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-12-10T00:58:18.197318Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5631c7033d9ffc08","timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:58:18.197545Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5631c7033d9ffc08"}
	{"level":"info","ts":"2024-12-10T00:58:18.197575Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"5631c7033d9ffc08","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-10T00:58:18.198138Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:58:18.203855Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-10T00:58:18.203963Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:18.203982Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:18.203993Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:18.204225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 switched to configuration voters=(6210964177853348872)"}
	{"level":"info","ts":"2024-12-10T00:58:18.204284Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be0816d0cb8232ac","local-member-id":"5631c7033d9ffc08","added-peer-id":"5631c7033d9ffc08","added-peer-peer-urls":["https://192.168.50.207:2380"]}
	{"level":"info","ts":"2024-12-10T00:58:18.204367Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be0816d0cb8232ac","local-member-id":"5631c7033d9ffc08","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:58:18.204391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:58:18.209067Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T00:58:18.209253Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5631c7033d9ffc08","initial-advertise-peer-urls":["https://192.168.50.207:2380"],"listen-peer-urls":["https://192.168.50.207:2380"],"advertise-client-urls":["https://192.168.50.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T00:58:18.209272Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:58:18.209335Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.207:2380"}
	{"level":"info","ts":"2024-12-10T00:58:18.209343Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.207:2380"}
	
	
	==> etcd [5e2a69a56fc215e5719d04b09be0607954e62533f823a302231928f233277520] <==
	{"level":"info","ts":"2024-12-10T00:58:38.176967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:58:38.179345Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T00:58:38.179765Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5631c7033d9ffc08","initial-advertise-peer-urls":["https://192.168.50.207:2380"],"listen-peer-urls":["https://192.168.50.207:2380"],"advertise-client-urls":["https://192.168.50.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T00:58:38.179827Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:58:38.176306Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:38.179944Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:38.179989Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-12-10T00:58:38.180131Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.207:2380"}
	{"level":"info","ts":"2024-12-10T00:58:38.180160Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.207:2380"}
	{"level":"info","ts":"2024-12-10T00:58:39.765947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-10T00:58:39.765980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:58:39.766006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 received MsgPreVoteResp from 5631c7033d9ffc08 at term 2"}
	{"level":"info","ts":"2024-12-10T00:58:39.766019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 became candidate at term 3"}
	{"level":"info","ts":"2024-12-10T00:58:39.766025Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 received MsgVoteResp from 5631c7033d9ffc08 at term 3"}
	{"level":"info","ts":"2024-12-10T00:58:39.766033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5631c7033d9ffc08 became leader at term 3"}
	{"level":"info","ts":"2024-12-10T00:58:39.766040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5631c7033d9ffc08 elected leader 5631c7033d9ffc08 at term 3"}
	{"level":"info","ts":"2024-12-10T00:58:39.767865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:58:39.768064Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:58:39.767865Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5631c7033d9ffc08","local-member-attributes":"{Name:kubernetes-upgrade-481624 ClientURLs:[https://192.168.50.207:2379]}","request-path":"/0/members/5631c7033d9ffc08/attributes","cluster-id":"be0816d0cb8232ac","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:58:39.768364Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:58:39.768377Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:58:39.768915Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:58:39.768958Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T00:58:39.769785Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.207:2379"}
	{"level":"info","ts":"2024-12-10T00:58:39.770503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:58:47 up 1 min,  0 users,  load average: 0.72, 0.21, 0.07
	Linux kubernetes-upgrade-481624 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd] <==
	I1210 00:58:18.114144       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:58:19.658173       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W1210 00:58:19.658777       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:19.658936       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1210 00:58:19.667658       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1210 00:58:19.669135       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1210 00:58:19.669149       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1210 00:58:19.669305       1 instance.go:232] Using reconciler: lease
	W1210 00:58:19.670063       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:20.659177       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:20.659254       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:20.671217       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:22.057298       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:22.120355       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:22.394806       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:24.353845       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:24.920421       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:25.226362       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:28.620226       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:28.944047       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:29.637756       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:35.468757       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:35.560479       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 00:58:35.782868       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1210 00:58:39.670213       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f648cb25f203e5a7263dbe32a33f9254049a1af1a6ae3e46a65577ab9b886550] <==
	I1210 00:58:43.626563       1 aggregator.go:171] initial CRD sync complete...
	I1210 00:58:43.626649       1 autoregister_controller.go:144] Starting autoregister controller
	I1210 00:58:43.626657       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1210 00:58:43.666896       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1210 00:58:43.675193       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1210 00:58:43.675239       1 policy_source.go:224] refreshing policies
	I1210 00:58:43.688698       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1210 00:58:43.689058       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1210 00:58:43.690465       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1210 00:58:43.693052       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1210 00:58:43.693462       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1210 00:58:43.693587       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 00:58:43.693777       1 shared_informer.go:320] Caches are synced for configmaps
	I1210 00:58:43.705434       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1210 00:58:43.709645       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1210 00:58:43.715119       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1210 00:58:43.734295       1 cache.go:39] Caches are synced for autoregister controller
	I1210 00:58:44.493337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 00:58:44.771947       1 controller.go:615] quota admission added evaluator for: endpoints
	I1210 00:58:45.046952       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1210 00:58:45.056721       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1210 00:58:45.091651       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1210 00:58:45.232788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 00:58:45.242243       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 00:58:47.347360       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8b9f9d31ee855aa477aa5c15141d6a10e9073b56a9d4bb935878d2035d42456e] <==
	
	
	==> kube-controller-manager [ece1b985e6bdaa4c42c9c6d239b84ce0aa6208adfb82891f660c1f677eb7e122] <==
	I1210 00:58:47.062545       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1210 00:58:47.063202       1 shared_informer.go:320] Caches are synced for persistent volume
	I1210 00:58:47.065100       1 shared_informer.go:320] Caches are synced for endpoint
	I1210 00:58:47.068400       1 shared_informer.go:320] Caches are synced for job
	I1210 00:58:47.084914       1 shared_informer.go:320] Caches are synced for crt configmap
	I1210 00:58:47.085059       1 shared_informer.go:320] Caches are synced for daemon sets
	I1210 00:58:47.085355       1 shared_informer.go:320] Caches are synced for GC
	I1210 00:58:47.086128       1 shared_informer.go:320] Caches are synced for expand
	I1210 00:58:47.089285       1 shared_informer.go:320] Caches are synced for stateful set
	I1210 00:58:47.118963       1 shared_informer.go:320] Caches are synced for HPA
	I1210 00:58:47.135425       1 shared_informer.go:320] Caches are synced for disruption
	I1210 00:58:47.168476       1 shared_informer.go:320] Caches are synced for resource quota
	I1210 00:58:47.191842       1 shared_informer.go:320] Caches are synced for resource quota
	I1210 00:58:47.236827       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1210 00:58:47.236925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1210 00:58:47.238935       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1210 00:58:47.239007       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1210 00:58:47.239052       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1210 00:58:47.315236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="266.813624ms"
	I1210 00:58:47.316132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.15µs"
	I1210 00:58:47.651383       1 shared_informer.go:320] Caches are synced for garbage collector
	I1210 00:58:47.694658       1 shared_informer.go:320] Caches are synced for garbage collector
	I1210 00:58:47.694693       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1210 00:58:47.798202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="18.781703ms"
	I1210 00:58:47.800781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.187713ms"
	
	
	==> kube-proxy [654b3bad85e12ee9b5103545e48c6b7b869a2952392406c52ed2dec509cfe108] <==
	
	
	==> kube-proxy [7022986108c81caea984393f1c55b60f2cc4fc0a48523cb87edb18422470c658] <==
	 >
	E1210 00:58:33.199844       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 00:58:40.677772       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-481624\": dial tcp 192.168.50.207:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.207:51766->192.168.50.207:8443: read: connection reset by peer"
	E1210 00:58:41.733172       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-481624\": dial tcp 192.168.50.207:8443: connect: connection refused"
	I1210 00:58:43.896073       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.207"]
	E1210 00:58:43.896313       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 00:58:43.927961       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 00:58:43.927992       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 00:58:43.928013       1 server_linux.go:169] "Using iptables Proxier"
	I1210 00:58:43.930754       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 00:58:43.931128       1 server.go:483] "Version info" version="v1.31.2"
	I1210 00:58:43.931211       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:58:43.932898       1 config.go:199] "Starting service config controller"
	I1210 00:58:43.932967       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 00:58:43.933038       1 config.go:105] "Starting endpoint slice config controller"
	I1210 00:58:43.933079       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 00:58:43.933421       1 config.go:328] "Starting node config controller"
	I1210 00:58:43.933479       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 00:58:44.033422       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 00:58:44.033424       1 shared_informer.go:320] Caches are synced for service config
	I1210 00:58:44.033557       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7] <==
	I1210 00:58:19.720440       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [c4b9d9ccd01f3b70fb5d600a4886a8f06ec6c2fd9f8fa07a099741cc68bd9c43] <==
	W1210 00:58:43.639892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1210 00:58:43.639955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	W1210 00:58:43.640090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1210 00:58:43.640154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.640293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1210 00:58:43.640370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	W1210 00:58:43.640526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.640702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.640825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.640880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.641059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.641128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.641250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.641308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.641413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1210 00:58:43.643678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError"
	W1210 00:58:43.643786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	E1210 00:58:43.643823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError"
	W1210 00:58:43.643860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.643886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.644653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1210 00:58:43.644702       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1210 00:58:43.644785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1210 00:58:43.644818       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError"
	I1210 00:58:43.731133       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.503629    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35cf37714b20b74da13dc076e642ca7e-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-481624\" (UID: \"35cf37714b20b74da13dc076e642ca7e\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.503653    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a4aefdb145601ba0b3211c1171f264e2-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-481624\" (UID: \"a4aefdb145601ba0b3211c1171f264e2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.503676    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4aefdb145601ba0b3211c1171f264e2-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-481624\" (UID: \"a4aefdb145601ba0b3211c1171f264e2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.503706    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4aefdb145601ba0b3211c1171f264e2-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-481624\" (UID: \"a4aefdb145601ba0b3211c1171f264e2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.503721    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4aefdb145601ba0b3211c1171f264e2-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-481624\" (UID: \"a4aefdb145601ba0b3211c1171f264e2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: E1210 00:58:41.503961    3606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-481624?timeout=10s\": dial tcp 192.168.50.207:8443: connect: connection refused" interval="400ms"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.692222    3606 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: E1210 00:58:41.693119    3606 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.207:8443: connect: connection refused" node="kubernetes-upgrade-481624"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.762159    3606 scope.go:117] "RemoveContainer" containerID="6d0fbf7a2c21cc793ca3d86d4dec18863ac065878b0bd24d35fa66a168f8e9bd"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:41.763716    3606 scope.go:117] "RemoveContainer" containerID="83146d4b359076c57397893854f3c6ac13707b14433f9f336a6a939ad0b890b7"
	Dec 10 00:58:41 kubernetes-upgrade-481624 kubelet[3606]: E1210 00:58:41.905889    3606 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-481624?timeout=10s\": dial tcp 192.168.50.207:8443: connect: connection refused" interval="800ms"
	Dec 10 00:58:42 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:42.095086    3606 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-481624"
	Dec 10 00:58:43 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:43.688252    3606 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-481624"
	Dec 10 00:58:43 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:43.688339    3606 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-481624"
	Dec 10 00:58:43 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:43.688363    3606 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 10 00:58:43 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:43.689430    3606 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.286064    3606 apiserver.go:52] "Watching apiserver"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.302744    3606 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.313568    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/290e4ca8-2a66-4523-8ce9-17a2dc5d6054-tmp\") pod \"storage-provisioner\" (UID: \"290e4ca8-2a66-4523-8ce9-17a2dc5d6054\") " pod="kube-system/storage-provisioner"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.313802    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fd7dab5-b234-45bc-88aa-2d4ade84e60e-xtables-lock\") pod \"kube-proxy-gsztm\" (UID: \"3fd7dab5-b234-45bc-88aa-2d4ade84e60e\") " pod="kube-system/kube-proxy-gsztm"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.314064    3606 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fd7dab5-b234-45bc-88aa-2d4ade84e60e-lib-modules\") pod \"kube-proxy-gsztm\" (UID: \"3fd7dab5-b234-45bc-88aa-2d4ade84e60e\") " pod="kube-system/kube-proxy-gsztm"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: E1210 00:58:44.485967    3606 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-481624\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-481624"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.591849    3606 scope.go:117] "RemoveContainer" containerID="77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1"
	Dec 10 00:58:44 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:44.592058    3606 scope.go:117] "RemoveContainer" containerID="f02b4cd9a7fcdfba7ec35a644b881139531b36d9a266183c8acc78203ee72593"
	Dec 10 00:58:47 kubernetes-upgrade-481624 kubelet[3606]: I1210 00:58:47.764070    3606 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [77addb89819de4194aa87a3dadda195ee7ffa22cfdc37b31bc5f8da9ae7f38f1] <==
	I1210 00:58:31.184216       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 00:58:40.677433       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [dea77d657889a3172b6e82da0d1aaefa2dfd7e0caff990e5b46233d4db738c34] <==
	I1210 00:58:44.747672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:58:44.762879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:58:44.762978       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:58:44.777864       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:58:44.778173       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-481624_e601d2be-090b-420d-83cb-495a02c0b1ec!
	I1210 00:58:44.779307       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"353c88fc-14c9-4b97-a4c7-92f705875a7d", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-481624_e601d2be-090b-420d-83cb-495a02c0b1ec became leader
	I1210 00:58:44.879363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-481624_e601d2be-090b-420d-83cb-495a02c0b1ec!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-481624 -n kubernetes-upgrade-481624
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-481624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-481624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-481624
--- FAIL: TestKubernetesUpgrade (391.14s)
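
The storage-provisioner log in the post-mortem above shows the replacement instance acquiring the kube-system/k8s.io-minikube-hostpath lock before starting its controller, while the earlier instance died with "connection refused" because the API server was down mid-upgrade. As a rough illustration of that leader-election pattern only (not the provisioner's actual implementation, which per the Endpoints event above uses an Endpoints-based lock from its own library), a minimal client-go sketch in Go could look like the following; the function name runWhenLeader and the use of a Lease lock are assumptions for illustration:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWhenLeader (hypothetical helper) blocks until this replica wins the lock,
	// then runs start; if the lease is lost, OnStoppedLeading fires and we exit.
	func runWhenLeader(ctx context.Context, start func(context.Context)) {
		cfg, err := rest.InClusterConfig() // the provisioner pod runs in-cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		hostname, _ := os.Hostname()
		lock := &resourcelock.LeaseLock{
			// same lock name/namespace as in the log above; the real provisioner
			// records its identity as <node>_<uuid>
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long a vanished leader is still honored
			RenewDeadline: 10 * time.Second, // leader must renew within this window
			RetryPeriod:   2 * time.Second,  // how often candidates retry acquisition
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: start,
				OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
			},
		})
	}

	func main() {
		runWhenLeader(context.Background(), func(ctx context.Context) {
			log.Println("became leader; provisioner controller would start here")
			<-ctx.Done()
		})
	}

The point of the lock is that only one provisioner instance acts on claims at a time, so the old instance failing while the new one takes the lease is expected behavior during the upgrade; the test failure itself is unrelated to this handover.
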

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190222 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-190222 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.970768828s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-190222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-190222" primary control-plane node in "pause-190222" cluster
	* Updating the running kvm2 "pause-190222" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-190222" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:57:30.881142  128534 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:57:30.881714  128534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:30.881728  128534 out.go:358] Setting ErrFile to fd 2...
	I1210 00:57:30.881736  128534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:30.882166  128534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:57:30.883024  128534 out.go:352] Setting JSON to false
	I1210 00:57:30.884581  128534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9602,"bootTime":1733782649,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:57:30.884695  128534 start.go:139] virtualization: kvm guest
	I1210 00:57:30.886613  128534 out.go:177] * [pause-190222] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:57:30.888021  128534 notify.go:220] Checking for updates...
	I1210 00:57:30.888030  128534 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:57:30.889376  128534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:57:30.890522  128534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:57:30.891545  128534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:57:30.892689  128534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:57:30.893689  128534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:57:30.895221  128534 config.go:182] Loaded profile config "pause-190222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:57:30.895828  128534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:30.895896  128534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:30.912124  128534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41703
	I1210 00:57:30.912778  128534 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:30.913328  128534 main.go:141] libmachine: Using API Version  1
	I1210 00:57:30.913348  128534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:30.913701  128534 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:30.913895  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:30.914157  128534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:57:30.914460  128534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:30.914504  128534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:30.930613  128534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I1210 00:57:30.930957  128534 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:30.931471  128534 main.go:141] libmachine: Using API Version  1
	I1210 00:57:30.931497  128534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:30.931822  128534 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:30.931989  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:30.970157  128534 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:57:30.971514  128534 start.go:297] selected driver: kvm2
	I1210 00:57:30.971530  128534 start.go:901] validating driver "kvm2" against &{Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:30.971716  128534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:57:30.972148  128534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:30.972251  128534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:57:30.986443  128534 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:57:30.987146  128534 cni.go:84] Creating CNI manager for ""
	I1210 00:57:30.987194  128534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:30.987251  128534 start.go:340] cluster config:
	{Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-190222 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:30.987393  128534 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:30.988924  128534 out.go:177] * Starting "pause-190222" primary control-plane node in "pause-190222" cluster
	I1210 00:57:30.989981  128534 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:30.990030  128534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:57:30.990039  128534 cache.go:56] Caching tarball of preloaded images
	I1210 00:57:30.990125  128534 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:57:30.990135  128534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:57:30.990239  128534 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/config.json ...
	I1210 00:57:30.990412  128534 start.go:360] acquireMachinesLock for pause-190222: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:57:38.626925  128534 start.go:364] duration metric: took 7.636466452s to acquireMachinesLock for "pause-190222"
	I1210 00:57:38.626968  128534 start.go:96] Skipping create...Using existing machine configuration
	I1210 00:57:38.626976  128534 fix.go:54] fixHost starting: 
	I1210 00:57:38.627342  128534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:38.627390  128534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:38.647955  128534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34399
	I1210 00:57:38.648497  128534 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:38.649162  128534 main.go:141] libmachine: Using API Version  1
	I1210 00:57:38.649197  128534 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:38.649603  128534 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:38.649761  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:38.649893  128534 main.go:141] libmachine: (pause-190222) Calling .GetState
	I1210 00:57:38.651738  128534 fix.go:112] recreateIfNeeded on pause-190222: state=Running err=<nil>
	W1210 00:57:38.651759  128534 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 00:57:38.653607  128534 out.go:177] * Updating the running kvm2 "pause-190222" VM ...
	I1210 00:57:38.654725  128534 machine.go:93] provisionDockerMachine start ...
	I1210 00:57:38.654745  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:38.654950  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:38.657489  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.657938  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:38.657965  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.658095  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:38.658267  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.658438  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.658607  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:38.658850  128534 main.go:141] libmachine: Using SSH client type: native
	I1210 00:57:38.659042  128534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.16 22 <nil> <nil>}
	I1210 00:57:38.659055  128534 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 00:57:38.771204  128534 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-190222
	
	I1210 00:57:38.771241  128534 main.go:141] libmachine: (pause-190222) Calling .GetMachineName
	I1210 00:57:38.771526  128534 buildroot.go:166] provisioning hostname "pause-190222"
	I1210 00:57:38.771562  128534 main.go:141] libmachine: (pause-190222) Calling .GetMachineName
	I1210 00:57:38.771738  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:38.774948  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.775409  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:38.775449  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.775576  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:38.775737  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.775845  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.775992  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:38.776190  128534 main.go:141] libmachine: Using SSH client type: native
	I1210 00:57:38.776423  128534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.16 22 <nil> <nil>}
	I1210 00:57:38.776437  128534 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-190222 && echo "pause-190222" | sudo tee /etc/hostname
	I1210 00:57:38.908912  128534 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-190222
	
	I1210 00:57:38.908944  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:38.911874  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.912187  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:38.912235  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:38.912410  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:38.912607  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.912763  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:38.912885  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:38.913051  128534 main.go:141] libmachine: Using SSH client type: native
	I1210 00:57:38.913209  128534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.16 22 <nil> <nil>}
	I1210 00:57:38.913224  128534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-190222' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-190222/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-190222' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:57:39.027615  128534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:57:39.027652  128534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:57:39.027676  128534 buildroot.go:174] setting up certificates
	I1210 00:57:39.027689  128534 provision.go:84] configureAuth start
	I1210 00:57:39.027703  128534 main.go:141] libmachine: (pause-190222) Calling .GetMachineName
	I1210 00:57:39.028013  128534 main.go:141] libmachine: (pause-190222) Calling .GetIP
	I1210 00:57:39.030774  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.031171  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:39.031207  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.031325  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:39.033717  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.034121  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:39.034160  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.034227  128534 provision.go:143] copyHostCerts
	I1210 00:57:39.034320  128534 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:57:39.034344  128534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:57:39.034424  128534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:57:39.034577  128534 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:57:39.034591  128534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:57:39.034628  128534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:57:39.034728  128534 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:57:39.034739  128534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:57:39.034767  128534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:57:39.034855  128534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.pause-190222 san=[127.0.0.1 192.168.61.16 localhost minikube pause-190222]
	I1210 00:57:39.104059  128534 provision.go:177] copyRemoteCerts
	I1210 00:57:39.104119  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:57:39.104144  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:39.107321  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.107729  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:39.107759  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.107949  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:39.108128  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:39.108286  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:39.108467  128534 sshutil.go:53] new ssh client: &{IP:192.168.61.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/pause-190222/id_rsa Username:docker}
	I1210 00:57:39.193415  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:57:39.217352  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1210 00:57:39.242198  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:57:39.266405  128534 provision.go:87] duration metric: took 238.700547ms to configureAuth
	I1210 00:57:39.266441  128534 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:57:39.266724  128534 config.go:182] Loaded profile config "pause-190222": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:57:39.266900  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:39.269953  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.270328  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:39.270367  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:39.270580  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:39.270776  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:39.270951  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:39.271100  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:39.271338  128534 main.go:141] libmachine: Using SSH client type: native
	I1210 00:57:39.271587  128534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.16 22 <nil> <nil>}
	I1210 00:57:39.271617  128534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:57:44.926388  128534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:57:44.926419  128534 machine.go:96] duration metric: took 6.271676916s to provisionDockerMachine
	I1210 00:57:44.926435  128534 start.go:293] postStartSetup for "pause-190222" (driver="kvm2")
	I1210 00:57:44.926447  128534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:57:44.926497  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:44.926880  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:57:44.926914  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:44.930091  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:44.930498  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:44.930519  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:44.930762  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:44.930939  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:44.931086  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:44.931243  128534 sshutil.go:53] new ssh client: &{IP:192.168.61.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/pause-190222/id_rsa Username:docker}
	I1210 00:57:45.024014  128534 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:57:45.028177  128534 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:57:45.028199  128534 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:57:45.028256  128534 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:57:45.028369  128534 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:57:45.028489  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:57:45.037168  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:57:45.059602  128534 start.go:296] duration metric: took 133.152236ms for postStartSetup
	I1210 00:57:45.059642  128534 fix.go:56] duration metric: took 6.432664718s for fixHost
	I1210 00:57:45.059669  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:45.062514  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.062917  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:45.062955  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.063132  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:45.063352  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:45.063515  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:45.063657  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:45.063796  128534 main.go:141] libmachine: Using SSH client type: native
	I1210 00:57:45.064006  128534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.16 22 <nil> <nil>}
	I1210 00:57:45.064020  128534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:57:45.171707  128534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792265.164726132
	
	I1210 00:57:45.171731  128534 fix.go:216] guest clock: 1733792265.164726132
	I1210 00:57:45.171738  128534 fix.go:229] Guest: 2024-12-10 00:57:45.164726132 +0000 UTC Remote: 2024-12-10 00:57:45.059647741 +0000 UTC m=+14.225652171 (delta=105.078391ms)
	I1210 00:57:45.171759  128534 fix.go:200] guest clock delta is within tolerance: 105.078391ms
	I1210 00:57:45.171764  128534 start.go:83] releasing machines lock for "pause-190222", held for 6.544815054s
	I1210 00:57:45.171784  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:45.172055  128534 main.go:141] libmachine: (pause-190222) Calling .GetIP
	I1210 00:57:45.174923  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.175300  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:45.175331  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.175460  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:45.175956  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:45.176130  128534 main.go:141] libmachine: (pause-190222) Calling .DriverName
	I1210 00:57:45.176233  128534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:57:45.176272  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:45.176398  128534 ssh_runner.go:195] Run: cat /version.json
	I1210 00:57:45.176431  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHHostname
	I1210 00:57:45.179181  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.179529  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.179684  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:45.179713  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.179934  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:45.180073  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:45.180097  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:45.180274  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:45.180286  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHPort
	I1210 00:57:45.180471  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:45.180475  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHKeyPath
	I1210 00:57:45.180654  128534 main.go:141] libmachine: (pause-190222) Calling .GetSSHUsername
	I1210 00:57:45.180659  128534 sshutil.go:53] new ssh client: &{IP:192.168.61.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/pause-190222/id_rsa Username:docker}
	I1210 00:57:45.180823  128534 sshutil.go:53] new ssh client: &{IP:192.168.61.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/pause-190222/id_rsa Username:docker}
	I1210 00:57:45.310477  128534 ssh_runner.go:195] Run: systemctl --version
	I1210 00:57:45.347282  128534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:57:45.673126  128534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:57:45.703056  128534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:57:45.703136  128534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:57:45.729453  128534 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1210 00:57:45.729478  128534 start.go:495] detecting cgroup driver to use...
	I1210 00:57:45.729533  128534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:57:45.786126  128534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:57:45.875791  128534 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:57:45.875885  128534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:57:45.897019  128534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:57:45.940479  128534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:57:46.263169  128534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:57:46.487105  128534 docker.go:233] disabling docker service ...
	I1210 00:57:46.487262  128534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:57:46.519306  128534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:57:46.544424  128534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:57:46.877494  128534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:57:47.186144  128534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 00:57:47.208045  128534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:57:47.237490  128534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 00:57:47.237560  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.254636  128534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:57:47.254730  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.272752  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.292994  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.311967  128534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 00:57:47.326585  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.339831  128534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.353084  128534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:57:47.368432  128534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:57:47.380605  128534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:57:47.392036  128534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:57:47.667083  128534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 00:57:57.770947  128534 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.103737837s)
	I1210 00:57:57.770989  128534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:57:57.771051  128534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:57:57.775750  128534 start.go:563] Will wait 60s for crictl version
	I1210 00:57:57.775814  128534 ssh_runner.go:195] Run: which crictl
	I1210 00:57:57.779588  128534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:57:57.824436  128534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:57:57.824555  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.864778  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.897142  128534 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:57:57.898359  128534 main.go:141] libmachine: (pause-190222) Calling .GetIP
	I1210 00:57:57.901900  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902399  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:57.902430  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902654  128534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 00:57:57.907443  128534 kubeadm.go:883] updating cluster {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:57:57.907611  128534 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:57.907668  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.953325  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.953347  128534 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:57:57.953408  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.986689  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.986713  128534 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:57:57.986721  128534 kubeadm.go:934] updating node { 192.168.61.16 8443 v1.31.2 crio true true} ...
	I1210 00:57:57.986810  128534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-190222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:57:57.986880  128534 ssh_runner.go:195] Run: crio config
	I1210 00:57:58.031020  128534 cni.go:84] Creating CNI manager for ""
	I1210 00:57:58.031049  128534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:58.031064  128534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:57:58.031095  128534 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.16 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-190222 NodeName:pause-190222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:57:58.031256  128534 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-190222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.16"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.16"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:57:58.031315  128534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:57:58.040815  128534 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:57:58.040879  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:57:58.049481  128534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1210 00:57:58.064248  128534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:57:58.078945  128534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:57:58.095789  128534 ssh_runner.go:195] Run: grep 192.168.61.16	control-plane.minikube.internal$ /etc/hosts
	I1210 00:57:58.099504  128534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:57:58.251534  128534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:57:58.268727  128534 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222 for IP: 192.168.61.16
	I1210 00:57:58.268751  128534 certs.go:194] generating shared ca certs ...
	I1210 00:57:58.268771  128534 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:57:58.268936  128534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:57:58.269000  128534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:57:58.269015  128534 certs.go:256] generating profile certs ...
	I1210 00:57:58.269125  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/client.key
	I1210 00:57:58.269202  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key.2cf55b9d
	I1210 00:57:58.269262  128534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key
	I1210 00:57:58.269412  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:57:58.269457  128534 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:57:58.269472  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:57:58.269511  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:57:58.269548  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:57:58.269584  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:57:58.269687  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:57:58.270322  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:57:58.297506  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:57:58.318831  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:57:58.342153  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:57:58.367980  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 00:57:58.391807  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:57:58.413797  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:57:58.436987  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:57:58.458710  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:57:58.482456  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:57:58.506013  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:57:58.529888  128534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:57:58.548163  128534 ssh_runner.go:195] Run: openssl version
	I1210 00:57:58.553818  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:57:58.563481  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567656  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567709  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.572854  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:57:58.581252  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:57:58.591326  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595357  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595399  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.601476  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:57:58.609871  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:57:58.619890  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623899  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623985  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.629036  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:57:58.637715  128534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:57:58.641836  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:57:58.647197  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:57:58.652228  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:57:58.657663  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:57:58.663150  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:57:58.668420  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:57:58.673661  128534 kubeadm.go:392] StartCluster: {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:58.673798  128534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:57:58.673843  128534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:57:58.708864  128534 cri.go:89] found id: "a1f69f90044d6fbc6de407021244b7ec3f9cec8112ca68b20b2539968404fb57"
	I1210 00:57:58.708885  128534 cri.go:89] found id: "de46a5cc48bb535d94e325bbdf49c9afa557c95f7e669ca7e289c378a45c0a5b"
	I1210 00:57:58.708889  128534 cri.go:89] found id: "d03c3df46733c24835e661fd436ba9b927491301d14c6c8c9b834b5d3d2f9147"
	I1210 00:57:58.708892  128534 cri.go:89] found id: "bd88ef94577131bfec1ede1afdaa25cf65b0f683ae7d8592eedcc89cb11f1642"
	I1210 00:57:58.708894  128534 cri.go:89] found id: "12697fb168a0e016c30221b175973f8e04f1e14a38e6b7f077f1b95ae30e1fd4"
	I1210 00:57:58.708897  128534 cri.go:89] found id: "c3eda879b9076b93a2e7772f444bffb5b271d3f1a513a616838cfacc87f1ed58"
	I1210 00:57:58.708905  128534 cri.go:89] found id: "99e3b2b1dc04d20efecfd2d1bd8c51b00d84917dda00532ef75849b6ce37f0b3"
	I1210 00:57:58.708907  128534 cri.go:89] found id: "66cd81c3acf40ed1670c2fd4997c82933af91c36d38823ca5faabb6054ef7115"
	I1210 00:57:58.708910  128534 cri.go:89] found id: "9962894ee1b8529359d13a937081c939934c1b41ec4d2c2f59a55a5c23a77215"
	I1210 00:57:58.708915  128534 cri.go:89] found id: "5552970cadc16093ec6dc84d83b705599d10fb5eff646bb22e598222a7233a4c"
	I1210 00:57:58.708918  128534 cri.go:89] found id: "d5dc4b4d1f7db09900513c5da9c536f1a550fa3cb2d32020e8fc1b704ed4a9b1"
	I1210 00:57:58.708921  128534 cri.go:89] found id: "3676f10beda73fde74d9b78e3d29a09a42ef23926bcc4df23b85ba7491532cc1"
	I1210 00:57:58.708923  128534 cri.go:89] found id: ""
	I1210 00:57:58.708963  128534 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
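The stderr capture above ends while minikube is re-validating the existing control-plane certificates with openssl x509 -checkend 86400 before deciding whether to regenerate them. As a minimal illustrative sketch (not part of the test output; the certificate path shown is the one minikube uses inside the guest VM), the same 24-hour expiry check can be reproduced by hand:

	# exit status 0 means the certificate is still valid for at least the next
	# 86400 seconds (24h); non-zero means it expires within that window and
	# minikube would regenerate it on the next start
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt valid for >24h" \
	  || echo "apiserver.crt expires within 24h"
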
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-190222 -n pause-190222
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-190222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-190222 logs -n 25: (1.384842661s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-796478 sudo              | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-796478 sudo find         | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-796478 sudo crio         | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-796478                   | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC | 10 Dec 24 00:54 UTC |
	| start   | -p stopped-upgrade-988830          | minikube                  | jenkins | v1.26.0 | 10 Dec 24 00:54 UTC | 10 Dec 24 00:55 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-971901 sudo        | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-988830 stop        | minikube                  | jenkins | v1.26.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| delete  | -p running-upgrade-993049          | running-upgrade-993049    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p stopped-upgrade-988830          | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:56 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-190222 --memory=2048      | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-971901 sudo        | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p cert-expiration-290541          | cert-expiration-290541    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-988830          | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:56 UTC |
	| start   | -p force-systemd-flag-887293       | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-190222                    | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:58 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-887293 ssh cat  | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-887293       | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p cert-options-086522             | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:57:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:57:53.457779  129030 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:57:53.457927  129030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:53.457937  129030 out.go:358] Setting ErrFile to fd 2...
	I1210 00:57:53.457942  129030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:53.458170  129030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:57:53.458802  129030 out.go:352] Setting JSON to false
	I1210 00:57:53.459804  129030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9624,"bootTime":1733782649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:57:53.459910  129030 start.go:139] virtualization: kvm guest
	I1210 00:57:53.461847  129030 out.go:177] * [kubernetes-upgrade-481624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:57:53.463179  129030 notify.go:220] Checking for updates...
	I1210 00:57:53.463192  129030 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:57:53.464336  129030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:57:53.465643  129030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:57:53.466977  129030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:57:53.468291  129030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:57:53.469378  129030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:57:53.470884  129030 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:57:53.471312  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:53.471368  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:53.487409  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I1210 00:57:53.487956  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:53.488634  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:57:53.488660  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:53.489192  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:53.489357  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:57:53.489576  129030 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:57:53.489852  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:53.489887  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:53.506345  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:57:53.506928  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:53.507581  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:57:53.507597  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:53.507984  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:53.508228  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:57:53.544579  129030 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:57:53.545997  129030 start.go:297] selected driver: kvm2
	I1210 00:57:53.546015  129030 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:53.546148  129030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:57:53.547282  129030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:53.547380  129030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:57:53.561835  129030 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:57:53.562405  129030 cni.go:84] Creating CNI manager for ""
	I1210 00:57:53.562476  129030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:53.562528  129030 start.go:340] cluster config:
	{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-481624 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:53.562757  129030 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:53.564919  129030 out.go:177] * Starting "kubernetes-upgrade-481624" primary control-plane node in "kubernetes-upgrade-481624" cluster
	I1210 00:57:57.770947  128534 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.103737837s)
	I1210 00:57:57.770989  128534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:57:57.771051  128534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:57:57.775750  128534 start.go:563] Will wait 60s for crictl version
	I1210 00:57:57.775814  128534 ssh_runner.go:195] Run: which crictl
	I1210 00:57:57.779588  128534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:57:57.824436  128534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:57:57.824555  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.864778  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.897142  128534 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:57:55.354825  128723 main.go:141] libmachine: (cert-options-086522) DBG | domain cert-options-086522 has defined MAC address 52:54:00:3f:b3:9d in network mk-cert-options-086522
	I1210 00:57:55.355284  128723 main.go:141] libmachine: (cert-options-086522) DBG | unable to find current IP address of domain cert-options-086522 in network mk-cert-options-086522
	I1210 00:57:55.355305  128723 main.go:141] libmachine: (cert-options-086522) DBG | I1210 00:57:55.355230  128851 retry.go:31] will retry after 2.670649156s: waiting for machine to come up
	I1210 00:57:58.027859  128723 main.go:141] libmachine: (cert-options-086522) DBG | domain cert-options-086522 has defined MAC address 52:54:00:3f:b3:9d in network mk-cert-options-086522
	I1210 00:57:58.028344  128723 main.go:141] libmachine: (cert-options-086522) DBG | unable to find current IP address of domain cert-options-086522 in network mk-cert-options-086522
	I1210 00:57:58.028365  128723 main.go:141] libmachine: (cert-options-086522) DBG | I1210 00:57:58.028303  128851 retry.go:31] will retry after 3.575905417s: waiting for machine to come up
	I1210 00:57:53.566015  129030 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:53.566070  129030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:57:53.566086  129030 cache.go:56] Caching tarball of preloaded images
	I1210 00:57:53.566177  129030 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:57:53.566194  129030 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:57:53.566353  129030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/config.json ...
	I1210 00:57:53.566624  129030 start.go:360] acquireMachinesLock for kubernetes-upgrade-481624: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:57:57.898359  128534 main.go:141] libmachine: (pause-190222) Calling .GetIP
	I1210 00:57:57.901900  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902399  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:57.902430  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902654  128534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 00:57:57.907443  128534 kubeadm.go:883] updating cluster {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:57:57.907611  128534 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:57.907668  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.953325  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.953347  128534 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:57:57.953408  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.986689  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.986713  128534 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:57:57.986721  128534 kubeadm.go:934] updating node { 192.168.61.16 8443 v1.31.2 crio true true} ...
	I1210 00:57:57.986810  128534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-190222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:57:57.986880  128534 ssh_runner.go:195] Run: crio config
	I1210 00:57:58.031020  128534 cni.go:84] Creating CNI manager for ""
	I1210 00:57:58.031049  128534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:58.031064  128534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:57:58.031095  128534 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.16 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-190222 NodeName:pause-190222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:57:58.031256  128534 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-190222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.16"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.16"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:57:58.031315  128534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:57:58.040815  128534 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:57:58.040879  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:57:58.049481  128534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1210 00:57:58.064248  128534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:57:58.078945  128534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:57:58.095789  128534 ssh_runner.go:195] Run: grep 192.168.61.16	control-plane.minikube.internal$ /etc/hosts
	I1210 00:57:58.099504  128534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:57:58.251534  128534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:57:58.268727  128534 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222 for IP: 192.168.61.16
	I1210 00:57:58.268751  128534 certs.go:194] generating shared ca certs ...
	I1210 00:57:58.268771  128534 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:57:58.268936  128534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:57:58.269000  128534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:57:58.269015  128534 certs.go:256] generating profile certs ...
	I1210 00:57:58.269125  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/client.key
	I1210 00:57:58.269202  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key.2cf55b9d
	I1210 00:57:58.269262  128534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key
	I1210 00:57:58.269412  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:57:58.269457  128534 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:57:58.269472  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:57:58.269511  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:57:58.269548  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:57:58.269584  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:57:58.269687  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:57:58.270322  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:57:58.297506  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:57:58.318831  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:57:58.342153  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:57:58.367980  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 00:57:58.391807  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:57:58.413797  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:57:58.436987  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:57:58.458710  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:57:58.482456  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:57:58.506013  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:57:58.529888  128534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:57:58.548163  128534 ssh_runner.go:195] Run: openssl version
	I1210 00:57:58.553818  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:57:58.563481  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567656  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567709  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.572854  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:57:58.581252  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:57:58.591326  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595357  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595399  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.601476  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:57:58.609871  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:57:58.619890  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623899  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623985  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.629036  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:57:58.637715  128534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:57:58.641836  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:57:58.647197  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:57:58.652228  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:57:58.657663  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:57:58.663150  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:57:58.668420  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:57:58.673661  128534 kubeadm.go:392] StartCluster: {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:58.673798  128534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:57:58.673843  128534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:57:58.708864  128534 cri.go:89] found id: "a1f69f90044d6fbc6de407021244b7ec3f9cec8112ca68b20b2539968404fb57"
	I1210 00:57:58.708885  128534 cri.go:89] found id: "de46a5cc48bb535d94e325bbdf49c9afa557c95f7e669ca7e289c378a45c0a5b"
	I1210 00:57:58.708889  128534 cri.go:89] found id: "d03c3df46733c24835e661fd436ba9b927491301d14c6c8c9b834b5d3d2f9147"
	I1210 00:57:58.708892  128534 cri.go:89] found id: "bd88ef94577131bfec1ede1afdaa25cf65b0f683ae7d8592eedcc89cb11f1642"
	I1210 00:57:58.708894  128534 cri.go:89] found id: "12697fb168a0e016c30221b175973f8e04f1e14a38e6b7f077f1b95ae30e1fd4"
	I1210 00:57:58.708897  128534 cri.go:89] found id: "c3eda879b9076b93a2e7772f444bffb5b271d3f1a513a616838cfacc87f1ed58"
	I1210 00:57:58.708905  128534 cri.go:89] found id: "99e3b2b1dc04d20efecfd2d1bd8c51b00d84917dda00532ef75849b6ce37f0b3"
	I1210 00:57:58.708907  128534 cri.go:89] found id: "66cd81c3acf40ed1670c2fd4997c82933af91c36d38823ca5faabb6054ef7115"
	I1210 00:57:58.708910  128534 cri.go:89] found id: "9962894ee1b8529359d13a937081c939934c1b41ec4d2c2f59a55a5c23a77215"
	I1210 00:57:58.708915  128534 cri.go:89] found id: "5552970cadc16093ec6dc84d83b705599d10fb5eff646bb22e598222a7233a4c"
	I1210 00:57:58.708918  128534 cri.go:89] found id: "d5dc4b4d1f7db09900513c5da9c536f1a550fa3cb2d32020e8fc1b704ed4a9b1"
	I1210 00:57:58.708921  128534 cri.go:89] found id: "3676f10beda73fde74d9b78e3d29a09a42ef23926bcc4df23b85ba7491532cc1"
	I1210 00:57:58.708923  128534 cri.go:89] found id: ""
	I1210 00:57:58.708963  128534 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-190222 -n pause-190222
helpers_test.go:261: (dbg) Run:  kubectl --context pause-190222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
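The post-mortem above ends by listing the kube-system containers directly through the container runtime (crictl) and the low-level runtime (runc). A minimal way to repeat that inspection by hand against the same profile, assuming the standard minikube ssh passthrough for running commands on the node (the ssh wrapper is illustrative; the inner commands are the ones the log records):

	out/minikube-linux-amd64 -p pause-190222 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p pause-190222 ssh -- sudo runc list -f json
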
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-190222 -n pause-190222
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-190222 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-190222 logs -n 25: (1.387133987s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-796478 sudo              | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | systemctl cat crio --no-pager      |                           |         |         |                     |                     |
	| ssh     | -p cilium-796478 sudo find         | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c      |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;               |                           |         |         |                     |                     |
	| ssh     | -p cilium-796478 sudo crio         | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | config                             |                           |         |         |                     |                     |
	| delete  | -p cilium-796478                   | cilium-796478             | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC | 10 Dec 24 00:54 UTC |
	| start   | -p stopped-upgrade-988830          | minikube                  | jenkins | v1.26.0 | 10 Dec 24 00:54 UTC | 10 Dec 24 00:55 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-971901 sudo        | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:54 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-988830 stop        | minikube                  | jenkins | v1.26.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| delete  | -p running-upgrade-993049          | running-upgrade-993049    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p stopped-upgrade-988830          | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:56 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-190222 --memory=2048      | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-971901 sudo        | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-971901             | NoKubernetes-971901       | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:55 UTC |
	| start   | -p cert-expiration-290541          | cert-expiration-290541    | jenkins | v1.34.0 | 10 Dec 24 00:55 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-988830          | stopped-upgrade-988830    | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:56 UTC |
	| start   | -p force-systemd-flag-887293       | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:56 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-190222                    | pause-190222              | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:58 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-887293 ssh cat  | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-887293       | force-systemd-flag-887293 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC | 10 Dec 24 00:57 UTC |
	| start   | -p cert-options-086522             | cert-options-086522       | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-481624       | kubernetes-upgrade-481624 | jenkins | v1.34.0 | 10 Dec 24 00:57 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 00:57:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 00:57:53.457779  129030 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:57:53.457927  129030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:53.457937  129030 out.go:358] Setting ErrFile to fd 2...
	I1210 00:57:53.457942  129030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:57:53.458170  129030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:57:53.458802  129030 out.go:352] Setting JSON to false
	I1210 00:57:53.459804  129030 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9624,"bootTime":1733782649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:57:53.459910  129030 start.go:139] virtualization: kvm guest
	I1210 00:57:53.461847  129030 out.go:177] * [kubernetes-upgrade-481624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:57:53.463179  129030 notify.go:220] Checking for updates...
	I1210 00:57:53.463192  129030 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:57:53.464336  129030 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:57:53.465643  129030 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:57:53.466977  129030 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:57:53.468291  129030 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:57:53.469378  129030 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:57:53.470884  129030 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:57:53.471312  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:53.471368  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:53.487409  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I1210 00:57:53.487956  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:53.488634  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:57:53.488660  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:53.489192  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:53.489357  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:57:53.489576  129030 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:57:53.489852  129030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:57:53.489887  129030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:57:53.506345  129030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1210 00:57:53.506928  129030 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:57:53.507581  129030 main.go:141] libmachine: Using API Version  1
	I1210 00:57:53.507597  129030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:57:53.507984  129030 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:57:53.508228  129030 main.go:141] libmachine: (kubernetes-upgrade-481624) Calling .DriverName
	I1210 00:57:53.544579  129030 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:57:53.545997  129030 start.go:297] selected driver: kvm2
	I1210 00:57:53.546015  129030 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-481624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:53.546148  129030 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:57:53.547282  129030 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:53.547380  129030 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:57:53.561835  129030 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:57:53.562405  129030 cni.go:84] Creating CNI manager for ""
	I1210 00:57:53.562476  129030 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:53.562528  129030 start.go:340] cluster config:
	{Name:kubernetes-upgrade-481624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-481624 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.207 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:53.562757  129030 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:57:53.564919  129030 out.go:177] * Starting "kubernetes-upgrade-481624" primary control-plane node in "kubernetes-upgrade-481624" cluster
	I1210 00:57:57.770947  128534 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.103737837s)
	I1210 00:57:57.770989  128534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:57:57.771051  128534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:57:57.775750  128534 start.go:563] Will wait 60s for crictl version
	I1210 00:57:57.775814  128534 ssh_runner.go:195] Run: which crictl
	I1210 00:57:57.779588  128534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:57:57.824436  128534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:57:57.824555  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.864778  128534 ssh_runner.go:195] Run: crio --version
	I1210 00:57:57.897142  128534 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 00:57:55.354825  128723 main.go:141] libmachine: (cert-options-086522) DBG | domain cert-options-086522 has defined MAC address 52:54:00:3f:b3:9d in network mk-cert-options-086522
	I1210 00:57:55.355284  128723 main.go:141] libmachine: (cert-options-086522) DBG | unable to find current IP address of domain cert-options-086522 in network mk-cert-options-086522
	I1210 00:57:55.355305  128723 main.go:141] libmachine: (cert-options-086522) DBG | I1210 00:57:55.355230  128851 retry.go:31] will retry after 2.670649156s: waiting for machine to come up
	I1210 00:57:58.027859  128723 main.go:141] libmachine: (cert-options-086522) DBG | domain cert-options-086522 has defined MAC address 52:54:00:3f:b3:9d in network mk-cert-options-086522
	I1210 00:57:58.028344  128723 main.go:141] libmachine: (cert-options-086522) DBG | unable to find current IP address of domain cert-options-086522 in network mk-cert-options-086522
	I1210 00:57:58.028365  128723 main.go:141] libmachine: (cert-options-086522) DBG | I1210 00:57:58.028303  128851 retry.go:31] will retry after 3.575905417s: waiting for machine to come up
	I1210 00:57:53.566015  129030 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:53.566070  129030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 00:57:53.566086  129030 cache.go:56] Caching tarball of preloaded images
	I1210 00:57:53.566177  129030 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:57:53.566194  129030 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 00:57:53.566353  129030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/kubernetes-upgrade-481624/config.json ...
	I1210 00:57:53.566624  129030 start.go:360] acquireMachinesLock for kubernetes-upgrade-481624: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:57:57.898359  128534 main.go:141] libmachine: (pause-190222) Calling .GetIP
	I1210 00:57:57.901900  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902399  128534 main.go:141] libmachine: (pause-190222) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:e0:4c", ip: ""} in network mk-pause-190222: {Iface:virbr3 ExpiryTime:2024-12-10 01:56:19 +0000 UTC Type:0 Mac:52:54:00:88:e0:4c Iaid: IPaddr:192.168.61.16 Prefix:24 Hostname:pause-190222 Clientid:01:52:54:00:88:e0:4c}
	I1210 00:57:57.902430  128534 main.go:141] libmachine: (pause-190222) DBG | domain pause-190222 has defined IP address 192.168.61.16 and MAC address 52:54:00:88:e0:4c in network mk-pause-190222
	I1210 00:57:57.902654  128534 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 00:57:57.907443  128534 kubeadm.go:883] updating cluster {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-pl
ugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:57:57.907611  128534 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 00:57:57.907668  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.953325  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.953347  128534 crio.go:433] Images already preloaded, skipping extraction
	I1210 00:57:57.953408  128534 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:57:57.986689  128534 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 00:57:57.986713  128534 cache_images.go:84] Images are preloaded, skipping loading
	I1210 00:57:57.986721  128534 kubeadm.go:934] updating node { 192.168.61.16 8443 v1.31.2 crio true true} ...
	I1210 00:57:57.986810  128534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-190222 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:57:57.986880  128534 ssh_runner.go:195] Run: crio config
	I1210 00:57:58.031020  128534 cni.go:84] Creating CNI manager for ""
	I1210 00:57:58.031049  128534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:57:58.031064  128534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:57:58.031095  128534 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.16 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-190222 NodeName:pause-190222 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 00:57:58.031256  128534 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-190222"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.16"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.16"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:57:58.031315  128534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 00:57:58.040815  128534 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:57:58.040879  128534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:57:58.049481  128534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1210 00:57:58.064248  128534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:57:58.078945  128534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1210 00:57:58.095789  128534 ssh_runner.go:195] Run: grep 192.168.61.16	control-plane.minikube.internal$ /etc/hosts
	I1210 00:57:58.099504  128534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:57:58.251534  128534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:57:58.268727  128534 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222 for IP: 192.168.61.16
	I1210 00:57:58.268751  128534 certs.go:194] generating shared ca certs ...
	I1210 00:57:58.268771  128534 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:57:58.268936  128534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:57:58.269000  128534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:57:58.269015  128534 certs.go:256] generating profile certs ...
	I1210 00:57:58.269125  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/client.key
	I1210 00:57:58.269202  128534 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key.2cf55b9d
	I1210 00:57:58.269262  128534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key
	I1210 00:57:58.269412  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:57:58.269457  128534 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:57:58.269472  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:57:58.269511  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:57:58.269548  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:57:58.269584  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:57:58.269687  128534 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:57:58.270322  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:57:58.297506  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:57:58.318831  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:57:58.342153  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:57:58.367980  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1210 00:57:58.391807  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:57:58.413797  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:57:58.436987  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/pause-190222/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 00:57:58.458710  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:57:58.482456  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:57:58.506013  128534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:57:58.529888  128534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:57:58.548163  128534 ssh_runner.go:195] Run: openssl version
	I1210 00:57:58.553818  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:57:58.563481  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567656  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.567709  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:57:58.572854  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 00:57:58.581252  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:57:58.591326  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595357  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.595399  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:57:58.601476  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:57:58.609871  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:57:58.619890  128534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623899  128534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.623985  128534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:57:58.629036  128534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:57:58.637715  128534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:57:58.641836  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 00:57:58.647197  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 00:57:58.652228  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 00:57:58.657663  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 00:57:58.663150  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 00:57:58.668420  128534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 00:57:58.673661  128534 kubeadm.go:392] StartCluster: {Name:pause-190222 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:pause-190222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.16 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugi
n:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:57:58.673798  128534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:57:58.673843  128534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:57:58.708864  128534 cri.go:89] found id: "a1f69f90044d6fbc6de407021244b7ec3f9cec8112ca68b20b2539968404fb57"
	I1210 00:57:58.708885  128534 cri.go:89] found id: "de46a5cc48bb535d94e325bbdf49c9afa557c95f7e669ca7e289c378a45c0a5b"
	I1210 00:57:58.708889  128534 cri.go:89] found id: "d03c3df46733c24835e661fd436ba9b927491301d14c6c8c9b834b5d3d2f9147"
	I1210 00:57:58.708892  128534 cri.go:89] found id: "bd88ef94577131bfec1ede1afdaa25cf65b0f683ae7d8592eedcc89cb11f1642"
	I1210 00:57:58.708894  128534 cri.go:89] found id: "12697fb168a0e016c30221b175973f8e04f1e14a38e6b7f077f1b95ae30e1fd4"
	I1210 00:57:58.708897  128534 cri.go:89] found id: "c3eda879b9076b93a2e7772f444bffb5b271d3f1a513a616838cfacc87f1ed58"
	I1210 00:57:58.708905  128534 cri.go:89] found id: "99e3b2b1dc04d20efecfd2d1bd8c51b00d84917dda00532ef75849b6ce37f0b3"
	I1210 00:57:58.708907  128534 cri.go:89] found id: "66cd81c3acf40ed1670c2fd4997c82933af91c36d38823ca5faabb6054ef7115"
	I1210 00:57:58.708910  128534 cri.go:89] found id: "9962894ee1b8529359d13a937081c939934c1b41ec4d2c2f59a55a5c23a77215"
	I1210 00:57:58.708915  128534 cri.go:89] found id: "5552970cadc16093ec6dc84d83b705599d10fb5eff646bb22e598222a7233a4c"
	I1210 00:57:58.708918  128534 cri.go:89] found id: "d5dc4b4d1f7db09900513c5da9c536f1a550fa3cb2d32020e8fc1b704ed4a9b1"
	I1210 00:57:58.708921  128534 cri.go:89] found id: "3676f10beda73fde74d9b78e3d29a09a42ef23926bcc4df23b85ba7491532cc1"
	I1210 00:57:58.708923  128534 cri.go:89] found id: ""
	I1210 00:57:58.708963  128534 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-190222 -n pause-190222
helpers_test.go:261: (dbg) Run:  kubectl --context pause-190222 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.94s)
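The failing sequence can be replayed outside the test harness with the two start invocations recorded in the Audit table above; a sketch, reusing the profile name from this run (any fresh profile name would behave the same way):

	out/minikube-linux-amd64 start -p pause-190222 --memory=2048 --install-addons=false --wait=all --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p pause-190222 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio

As the test name suggests, the second start is expected to reuse the existing configuration rather than reconfigure the cluster, which is why the log above shows the profile certificates being validated (openssl x509 ... -checkend 86400) and the preloaded images being reused instead of regenerated.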

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (270.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m30.080983378s)

                                                
                                                
-- stdout --
	* [old-k8s-version-094470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-094470" primary control-plane node in "old-k8s-version-094470" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
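The repeated "Generating certificates and keys ..." / "Booting up control plane ..." steps in the stdout above suggest kubeadm init did not come up on the first attempt and was retried before the start ultimately failed with exit status 109. For a failure like this, the per-profile logs can be pulled the same way the harness does for other profiles in this report, e.g.:

	out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25
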
** stderr ** 
	I1210 00:58:25.880585  129622 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:58:25.880691  129622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:58:25.880703  129622 out.go:358] Setting ErrFile to fd 2...
	I1210 00:58:25.880709  129622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:58:25.880898  129622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:58:25.881455  129622 out.go:352] Setting JSON to false
	I1210 00:58:25.882487  129622 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9657,"bootTime":1733782649,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:58:25.882621  129622 start.go:139] virtualization: kvm guest
	I1210 00:58:25.884722  129622 out.go:177] * [old-k8s-version-094470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:58:25.885982  129622 notify.go:220] Checking for updates...
	I1210 00:58:25.886023  129622 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:58:25.887396  129622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:58:25.888834  129622 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:58:25.890246  129622 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:25.891433  129622 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:58:25.892707  129622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:58:25.894482  129622 config.go:182] Loaded profile config "cert-expiration-290541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:25.894653  129622 config.go:182] Loaded profile config "cert-options-086522": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:25.894783  129622 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:58:25.894895  129622 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:58:25.932681  129622 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:58:25.933835  129622 start.go:297] selected driver: kvm2
	I1210 00:58:25.933850  129622 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:58:25.933879  129622 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:58:25.934681  129622 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:25.934757  129622 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 00:58:25.949991  129622 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 00:58:25.950058  129622 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 00:58:25.950475  129622 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 00:58:25.950519  129622 cni.go:84] Creating CNI manager for ""
	I1210 00:58:25.950597  129622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:58:25.950610  129622 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 00:58:25.950666  129622 start.go:340] cluster config:
	{Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
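The profile config dumped above is what minikube generated from the test's start flags. As a rough, manually-assembled equivalent (the flag names are standard minikube options, not taken from this log), the same cluster shape could be requested with:

	# Sketch only: mirrors the key fields of the generated cluster config above.
	minikube start -p old-k8s-version-094470 \
	  --driver=kvm2 \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --memory=2200 --cpus=2 --disk-size=20000mb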
	I1210 00:58:25.950763  129622 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 00:58:25.952988  129622 out.go:177] * Starting "old-k8s-version-094470" primary control-plane node in "old-k8s-version-094470" cluster
	I1210 00:58:25.954126  129622 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 00:58:25.954171  129622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 00:58:25.954182  129622 cache.go:56] Caching tarball of preloaded images
	I1210 00:58:25.954261  129622 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 00:58:25.954277  129622 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 00:58:25.954378  129622 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 00:58:25.954400  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json: {Name:mk138881a5fea549756e7918ce27b86db730af46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:25.954536  129622 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 00:58:25.954595  129622 start.go:364] duration metric: took 43.118µs to acquireMachinesLock for "old-k8s-version-094470"
	I1210 00:58:25.954616  129622 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 00:58:25.954679  129622 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 00:58:25.956095  129622 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 00:58:25.956243  129622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:58:25.956282  129622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:58:25.970268  129622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I1210 00:58:25.970750  129622 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:58:25.971315  129622 main.go:141] libmachine: Using API Version  1
	I1210 00:58:25.971341  129622 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:58:25.971676  129622 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:58:25.971897  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 00:58:25.972057  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:25.972222  129622 start.go:159] libmachine.API.Create for "old-k8s-version-094470" (driver="kvm2")
	I1210 00:58:25.972255  129622 client.go:168] LocalClient.Create starting
	I1210 00:58:25.972280  129622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 00:58:25.972306  129622 main.go:141] libmachine: Decoding PEM data...
	I1210 00:58:25.972322  129622 main.go:141] libmachine: Parsing certificate...
	I1210 00:58:25.972375  129622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 00:58:25.972393  129622 main.go:141] libmachine: Decoding PEM data...
	I1210 00:58:25.972405  129622 main.go:141] libmachine: Parsing certificate...
	I1210 00:58:25.972425  129622 main.go:141] libmachine: Running pre-create checks...
	I1210 00:58:25.972434  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .PreCreateCheck
	I1210 00:58:25.972777  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 00:58:25.973220  129622 main.go:141] libmachine: Creating machine...
	I1210 00:58:25.973238  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .Create
	I1210 00:58:25.973385  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating KVM machine...
	I1210 00:58:25.974732  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found existing default KVM network
	I1210 00:58:25.976120  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.975963  129664 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:1c:a4} reservation:<nil>}
	I1210 00:58:25.977051  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.976915  129664 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:50:21} reservation:<nil>}
	I1210 00:58:25.978304  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:25.978225  129664 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380740}
	I1210 00:58:25.978339  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | created network xml: 
	I1210 00:58:25.978352  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | <network>
	I1210 00:58:25.978364  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <name>mk-old-k8s-version-094470</name>
	I1210 00:58:25.978378  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <dns enable='no'/>
	I1210 00:58:25.978393  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   
	I1210 00:58:25.978405  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1210 00:58:25.978427  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |     <dhcp>
	I1210 00:58:25.978439  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1210 00:58:25.978449  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |     </dhcp>
	I1210 00:58:25.978479  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   </ip>
	I1210 00:58:25.978502  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG |   
	I1210 00:58:25.978509  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | </network>
	I1210 00:58:25.978519  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | 
	I1210 00:58:25.983410  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | trying to create private KVM network mk-old-k8s-version-094470 192.168.61.0/24...
	I1210 00:58:26.063722  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | private KVM network mk-old-k8s-version-094470 192.168.61.0/24 created
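With the private network in place, it can be inspected on the host with standard libvirt tooling; a minimal sketch, assuming access to the same qemu:///system connection the driver uses (these commands are not part of the test):

	# Inspect the network minikube just created (name taken from the log above).
	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-094470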
	I1210 00:58:26.063761  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.063695  129664 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:26.063777  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 ...
	I1210 00:58:26.063800  129622 main.go:141] libmachine: (old-k8s-version-094470) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 00:58:26.063816  129622 main.go:141] libmachine: (old-k8s-version-094470) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 00:58:26.347854  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.347709  129664 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa...
	I1210 00:58:26.619291  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.619164  129664 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/old-k8s-version-094470.rawdisk...
	I1210 00:58:26.619319  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Writing magic tar header
	I1210 00:58:26.619346  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Writing SSH key tar header
	I1210 00:58:26.619432  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:26.619368  129664 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 ...
	I1210 00:58:26.619674  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470
	I1210 00:58:26.619718  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470 (perms=drwx------)
	I1210 00:58:26.619742  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 00:58:26.619755  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 00:58:26.619780  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 00:58:26.619790  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:58:26.619800  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 00:58:26.619829  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 00:58:26.619841  129622 main.go:141] libmachine: (old-k8s-version-094470) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 00:58:26.619851  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 00:58:26.619859  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 00:58:26.619867  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home/jenkins
	I1210 00:58:26.619875  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Checking permissions on dir: /home
	I1210 00:58:26.619882  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 00:58:26.619887  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Skipping /home - not owner
	I1210 00:58:27.036255  129622 main.go:141] libmachine: (old-k8s-version-094470) define libvirt domain using xml: 
	I1210 00:58:27.036287  129622 main.go:141] libmachine: (old-k8s-version-094470) <domain type='kvm'>
	I1210 00:58:27.036326  129622 main.go:141] libmachine: (old-k8s-version-094470)   <name>old-k8s-version-094470</name>
	I1210 00:58:27.036348  129622 main.go:141] libmachine: (old-k8s-version-094470)   <memory unit='MiB'>2200</memory>
	I1210 00:58:27.036362  129622 main.go:141] libmachine: (old-k8s-version-094470)   <vcpu>2</vcpu>
	I1210 00:58:27.036372  129622 main.go:141] libmachine: (old-k8s-version-094470)   <features>
	I1210 00:58:27.036381  129622 main.go:141] libmachine: (old-k8s-version-094470)     <acpi/>
	I1210 00:58:27.036396  129622 main.go:141] libmachine: (old-k8s-version-094470)     <apic/>
	I1210 00:58:27.036404  129622 main.go:141] libmachine: (old-k8s-version-094470)     <pae/>
	I1210 00:58:27.036415  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036437  129622 main.go:141] libmachine: (old-k8s-version-094470)   </features>
	I1210 00:58:27.036465  129622 main.go:141] libmachine: (old-k8s-version-094470)   <cpu mode='host-passthrough'>
	I1210 00:58:27.036477  129622 main.go:141] libmachine: (old-k8s-version-094470)   
	I1210 00:58:27.036483  129622 main.go:141] libmachine: (old-k8s-version-094470)   </cpu>
	I1210 00:58:27.036500  129622 main.go:141] libmachine: (old-k8s-version-094470)   <os>
	I1210 00:58:27.036514  129622 main.go:141] libmachine: (old-k8s-version-094470)     <type>hvm</type>
	I1210 00:58:27.036524  129622 main.go:141] libmachine: (old-k8s-version-094470)     <boot dev='cdrom'/>
	I1210 00:58:27.036531  129622 main.go:141] libmachine: (old-k8s-version-094470)     <boot dev='hd'/>
	I1210 00:58:27.036540  129622 main.go:141] libmachine: (old-k8s-version-094470)     <bootmenu enable='no'/>
	I1210 00:58:27.036546  129622 main.go:141] libmachine: (old-k8s-version-094470)   </os>
	I1210 00:58:27.036555  129622 main.go:141] libmachine: (old-k8s-version-094470)   <devices>
	I1210 00:58:27.036563  129622 main.go:141] libmachine: (old-k8s-version-094470)     <disk type='file' device='cdrom'>
	I1210 00:58:27.036577  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/boot2docker.iso'/>
	I1210 00:58:27.036593  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target dev='hdc' bus='scsi'/>
	I1210 00:58:27.036603  129622 main.go:141] libmachine: (old-k8s-version-094470)       <readonly/>
	I1210 00:58:27.036609  129622 main.go:141] libmachine: (old-k8s-version-094470)     </disk>
	I1210 00:58:27.036618  129622 main.go:141] libmachine: (old-k8s-version-094470)     <disk type='file' device='disk'>
	I1210 00:58:27.036628  129622 main.go:141] libmachine: (old-k8s-version-094470)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 00:58:27.036642  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/old-k8s-version-094470.rawdisk'/>
	I1210 00:58:27.036650  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target dev='hda' bus='virtio'/>
	I1210 00:58:27.036659  129622 main.go:141] libmachine: (old-k8s-version-094470)     </disk>
	I1210 00:58:27.036766  129622 main.go:141] libmachine: (old-k8s-version-094470)     <interface type='network'>
	I1210 00:58:27.036777  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source network='mk-old-k8s-version-094470'/>
	I1210 00:58:27.036784  129622 main.go:141] libmachine: (old-k8s-version-094470)       <model type='virtio'/>
	I1210 00:58:27.036791  129622 main.go:141] libmachine: (old-k8s-version-094470)     </interface>
	I1210 00:58:27.036799  129622 main.go:141] libmachine: (old-k8s-version-094470)     <interface type='network'>
	I1210 00:58:27.036807  129622 main.go:141] libmachine: (old-k8s-version-094470)       <source network='default'/>
	I1210 00:58:27.036824  129622 main.go:141] libmachine: (old-k8s-version-094470)       <model type='virtio'/>
	I1210 00:58:27.036844  129622 main.go:141] libmachine: (old-k8s-version-094470)     </interface>
	I1210 00:58:27.036860  129622 main.go:141] libmachine: (old-k8s-version-094470)     <serial type='pty'>
	I1210 00:58:27.036869  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target port='0'/>
	I1210 00:58:27.036874  129622 main.go:141] libmachine: (old-k8s-version-094470)     </serial>
	I1210 00:58:27.036880  129622 main.go:141] libmachine: (old-k8s-version-094470)     <console type='pty'>
	I1210 00:58:27.036887  129622 main.go:141] libmachine: (old-k8s-version-094470)       <target type='serial' port='0'/>
	I1210 00:58:27.036896  129622 main.go:141] libmachine: (old-k8s-version-094470)     </console>
	I1210 00:58:27.036904  129622 main.go:141] libmachine: (old-k8s-version-094470)     <rng model='virtio'>
	I1210 00:58:27.036914  129622 main.go:141] libmachine: (old-k8s-version-094470)       <backend model='random'>/dev/random</backend>
	I1210 00:58:27.036920  129622 main.go:141] libmachine: (old-k8s-version-094470)     </rng>
	I1210 00:58:27.036931  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036938  129622 main.go:141] libmachine: (old-k8s-version-094470)     
	I1210 00:58:27.036946  129622 main.go:141] libmachine: (old-k8s-version-094470)   </devices>
	I1210 00:58:27.036952  129622 main.go:141] libmachine: (old-k8s-version-094470) </domain>
	I1210 00:58:27.036964  129622 main.go:141] libmachine: (old-k8s-version-094470) 
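While the log below waits for the guest to obtain an IP, the domain defined from this XML and its eventual DHCP lease can be checked by hand; a small sketch using standard virsh commands, not anything recorded in this run:

	# Dump the domain definition and watch for the lease on the private network.
	virsh --connect qemu:///system dumpxml old-k8s-version-094470
	virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-094470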
	I1210 00:58:27.044376  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:69:03:7a in network default
	I1210 00:58:27.045003  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 00:58:27.045028  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:27.045895  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 00:58:27.046196  129622 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 00:58:27.046764  129622 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 00:58:27.047506  129622 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 00:58:28.289873  129622 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 00:58:28.290835  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.291271  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.291298  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.291250  129664 retry.go:31] will retry after 200.837698ms: waiting for machine to come up
	I1210 00:58:28.493600  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.494060  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.494089  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.494018  129664 retry.go:31] will retry after 273.268694ms: waiting for machine to come up
	I1210 00:58:28.768426  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:28.768967  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:28.769011  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:28.768914  129664 retry.go:31] will retry after 332.226861ms: waiting for machine to come up
	I1210 00:58:29.102323  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:29.102785  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:29.102816  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:29.102742  129664 retry.go:31] will retry after 585.665087ms: waiting for machine to come up
	I1210 00:58:29.690126  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:29.690863  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:29.690892  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:29.690830  129664 retry.go:31] will retry after 601.766804ms: waiting for machine to come up
	I1210 00:58:30.294665  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:30.295116  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:30.295138  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:30.295053  129664 retry.go:31] will retry after 765.321784ms: waiting for machine to come up
	I1210 00:58:31.062519  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:31.062916  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:31.062947  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:31.062859  129664 retry.go:31] will retry after 887.24548ms: waiting for machine to come up
	I1210 00:58:31.951885  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:31.952435  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:31.952468  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:31.952380  129664 retry.go:31] will retry after 1.396905116s: waiting for machine to come up
	I1210 00:58:33.350891  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:33.351284  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:33.351332  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:33.351234  129664 retry.go:31] will retry after 1.265722199s: waiting for machine to come up
	I1210 00:58:34.618695  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:34.619106  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:34.619134  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:34.619059  129664 retry.go:31] will retry after 1.981614225s: waiting for machine to come up
	I1210 00:58:36.602233  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:36.602770  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:36.602795  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:36.602713  129664 retry.go:31] will retry after 2.224825931s: waiting for machine to come up
	I1210 00:58:38.829071  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:38.829597  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:38.829629  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:38.829534  129664 retry.go:31] will retry after 2.685492556s: waiting for machine to come up
	I1210 00:58:41.516677  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:41.517114  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:41.517142  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:41.517068  129664 retry.go:31] will retry after 4.456616812s: waiting for machine to come up
	I1210 00:58:45.975620  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:45.976182  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 00:58:45.976211  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 00:58:45.976130  129664 retry.go:31] will retry after 4.690217508s: waiting for machine to come up
	I1210 00:58:50.672051  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.672693  129622 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 00:58:50.672717  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.672725  129622 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 00:58:50.673115  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470
	I1210 00:58:50.746050  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 00:58:50.746082  129622 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 00:58:50.746106  129622 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 00:58:50.748356  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.748822  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:50.748856  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.749010  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 00:58:50.749038  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 00:58:50.749103  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 00:58:50.749117  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 00:58:50.749129  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 00:58:50.874435  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 00:58:50.874765  129622 main.go:141] libmachine: (old-k8s-version-094470) KVM machine creation complete!
	I1210 00:58:50.875102  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 00:58:50.875658  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:50.875817  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:50.876014  129622 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1210 00:58:50.876027  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 00:58:50.877446  129622 main.go:141] libmachine: Detecting operating system of created instance...
	I1210 00:58:50.877464  129622 main.go:141] libmachine: Waiting for SSH to be available...
	I1210 00:58:50.877489  129622 main.go:141] libmachine: Getting to WaitForSSH function...
	I1210 00:58:50.877499  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:50.879830  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.880205  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:50.880228  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.880345  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:50.880528  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:50.880721  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:50.880857  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:50.881043  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:50.881253  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:50.881268  129622 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1210 00:58:50.989467  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:58:50.989491  129622 main.go:141] libmachine: Detecting the provisioner...
	I1210 00:58:50.989508  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:50.992384  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.992752  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:50.992774  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:50.992932  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:50.993119  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:50.993260  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:50.993362  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:50.993506  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:50.993680  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:50.993691  129622 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1210 00:58:51.102468  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1210 00:58:51.102604  129622 main.go:141] libmachine: found compatible host: buildroot
	I1210 00:58:51.102624  129622 main.go:141] libmachine: Provisioning with buildroot...
	I1210 00:58:51.102639  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 00:58:51.102915  129622 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 00:58:51.102949  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 00:58:51.103128  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:51.105922  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.106329  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.106352  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.106658  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:51.106858  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.107034  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.107145  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:51.107338  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:51.107576  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:51.107589  129622 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 00:58:51.227370  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 00:58:51.227412  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:51.230175  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.230594  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.230624  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.230831  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:51.231022  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.231191  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.231299  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:51.231505  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:51.231765  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:51.231793  129622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 00:58:51.345992  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 00:58:51.346032  129622 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 00:58:51.346051  129622 buildroot.go:174] setting up certificates
	I1210 00:58:51.346062  129622 provision.go:84] configureAuth start
	I1210 00:58:51.346071  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 00:58:51.346332  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 00:58:51.349140  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.349488  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.349515  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.349655  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:51.351819  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.352133  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.352163  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.352278  129622 provision.go:143] copyHostCerts
	I1210 00:58:51.352353  129622 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 00:58:51.352382  129622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 00:58:51.352456  129622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 00:58:51.352580  129622 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 00:58:51.352592  129622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 00:58:51.352630  129622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 00:58:51.352709  129622 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 00:58:51.352720  129622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 00:58:51.352753  129622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 00:58:51.352820  129622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 00:58:51.649267  129622 provision.go:177] copyRemoteCerts
	I1210 00:58:51.649330  129622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 00:58:51.649360  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:51.652003  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.652302  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.652336  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.652500  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:51.652721  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.652869  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:51.653002  129622 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 00:58:51.735533  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 00:58:51.760665  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 00:58:51.785202  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 00:58:51.809006  129622 provision.go:87] duration metric: took 462.931992ms to configureAuth
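If the result of configureAuth needs to be verified by hand, the certificates copied above can be read back from the guest; a sketch assuming the standard minikube ssh entry point and stock openssl (neither command appears in this log):

	# Confirm the provisioned server certificate and its SANs inside the VM.
	minikube -p old-k8s-version-094470 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'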
	I1210 00:58:51.809031  129622 buildroot.go:189] setting minikube options for container-runtime
	I1210 00:58:51.809212  129622 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 00:58:51.809302  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:51.811948  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.812303  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:51.812341  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:51.812475  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:51.812670  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.812819  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:51.812950  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:51.813089  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:51.813251  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:51.813268  129622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 00:58:52.033418  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 00:58:52.033444  129622 main.go:141] libmachine: Checking connection to Docker...
	I1210 00:58:52.033452  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetURL
	I1210 00:58:52.034705  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using libvirt version 6000000
	I1210 00:58:52.036804  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.037153  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.037182  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.037348  129622 main.go:141] libmachine: Docker is up and running!
	I1210 00:58:52.037365  129622 main.go:141] libmachine: Reticulating splines...
	I1210 00:58:52.037373  129622 client.go:171] duration metric: took 26.065108067s to LocalClient.Create
	I1210 00:58:52.037395  129622 start.go:167] duration metric: took 26.065176475s to libmachine.API.Create "old-k8s-version-094470"
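Before postStartSetup proceeds, the container-runtime option written to /etc/sysconfig/crio.minikube a few lines above can be double-checked in the guest; a minimal sketch, again assuming minikube ssh rather than anything shown in this log:

	# Verify the insecure-registry option landed and crio restarted cleanly.
	minikube -p old-k8s-version-094470 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p old-k8s-version-094470 ssh -- sudo systemctl is-active crio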
	I1210 00:58:52.037415  129622 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 00:58:52.037426  129622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 00:58:52.037455  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:52.037707  129622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 00:58:52.037730  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:52.040093  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.040365  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.040386  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.040551  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:52.040745  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:52.040880  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:52.041030  129622 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 00:58:52.123684  129622 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 00:58:52.127416  129622 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 00:58:52.127437  129622 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 00:58:52.127489  129622 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 00:58:52.127573  129622 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 00:58:52.127664  129622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 00:58:52.136020  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:58:52.156689  129622 start.go:296] duration metric: took 119.260053ms for postStartSetup
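
The postStartSetup block above mirrors everything under the profile's local files/ tree (here the 862962.pem cert) to the same path on the guest. A minimal sketch of that mirroring step, assuming a plain directory walk and local file copies rather than minikube's ssh_runner/scp transfer; the paths in main are illustrative:

// Mirror every regular file under srcRoot to the same relative path under dstRoot.
package main

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

func syncTree(srcRoot, dstRoot string) error {
	return filepath.WalkDir(srcRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(srcRoot, path)
		if err != nil {
			return err
		}
		dst := filepath.Join(dstRoot, rel)
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return err
		}
		in, err := os.Open(path)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// Example: mirror ./files onto a scratch root instead of the real guest filesystem.
	if err := syncTree("files", "/tmp/guest-root"); err != nil {
		panic(err)
	}
}
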
	I1210 00:58:52.156743  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 00:58:52.157413  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 00:58:52.159917  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.160221  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.160250  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.160460  129622 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 00:58:52.160620  129622 start.go:128] duration metric: took 26.205932442s to createHost
	I1210 00:58:52.160641  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:52.163000  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.163350  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.163395  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.163508  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:52.163689  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:52.163836  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:52.163964  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:52.164117  129622 main.go:141] libmachine: Using SSH client type: native
	I1210 00:58:52.164318  129622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 00:58:52.164331  129622 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 00:58:52.274493  129622 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792332.255202789
	
	I1210 00:58:52.274518  129622 fix.go:216] guest clock: 1733792332.255202789
	I1210 00:58:52.274528  129622 fix.go:229] Guest: 2024-12-10 00:58:52.255202789 +0000 UTC Remote: 2024-12-10 00:58:52.160631355 +0000 UTC m=+26.321368189 (delta=94.571434ms)
	I1210 00:58:52.274587  129622 fix.go:200] guest clock delta is within tolerance: 94.571434ms
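
The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host and only resync when the delta exceeds a tolerance; here the ~94ms delta is accepted. A minimal sketch of that check, assuming a local exec call in place of SSH and an illustrative 2s threshold:

// Read the guest clock, compare it against the host clock, and report the delta.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output() // in the real flow this runs over SSH
	if err != nil {
		panic(err)
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
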
	I1210 00:58:52.274598  129622 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 26.319990748s
	I1210 00:58:52.274629  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:52.274951  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 00:58:52.277771  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.278313  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.278354  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.278577  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:52.279090  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:52.279292  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 00:58:52.279394  129622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 00:58:52.279450  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:52.279547  129622 ssh_runner.go:195] Run: cat /version.json
	I1210 00:58:52.279578  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 00:58:52.282293  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.282445  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.282643  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.282668  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.282911  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:52.282918  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:52.282974  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:52.283203  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 00:58:52.283209  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:52.283383  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 00:58:52.283416  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:52.283506  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 00:58:52.283659  129622 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 00:58:52.283695  129622 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 00:58:52.363456  129622 ssh_runner.go:195] Run: systemctl --version
	I1210 00:58:52.390335  129622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 00:58:52.550145  129622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 00:58:52.556073  129622 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 00:58:52.556150  129622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 00:58:52.578409  129622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
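
Before installing its own CNI, minikube renames any pre-existing bridge/podman configs in /etc/cni/net.d so they cannot conflict, which is what the find/mv above did to 87-podman-bridge.conflist. A rough equivalent of that rename step (the in-process implementation is an assumption, not minikube's code):

// Rename bridge/podman CNI configs so the container runtime ignores them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	fmt.Println("disabled:", disabled)
}
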
	I1210 00:58:52.578431  129622 start.go:495] detecting cgroup driver to use...
	I1210 00:58:52.578492  129622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 00:58:52.594316  129622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 00:58:52.607160  129622 docker.go:217] disabling cri-docker service (if available) ...
	I1210 00:58:52.607214  129622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 00:58:52.619378  129622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 00:58:52.631650  129622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 00:58:52.740524  129622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 00:58:52.903137  129622 docker.go:233] disabling docker service ...
	I1210 00:58:52.903211  129622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 00:58:52.920287  129622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 00:58:52.934866  129622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 00:58:53.069504  129622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 00:58:53.185998  129622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
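
The systemctl calls above make sure only CRI-O answers on the CRI socket: containerd is stopped and cri-dockerd/dockerd are stopped, disabled, and masked. A sketch of the same cleanup run locally (minikube performs it over SSH, and per-unit failures are tolerated because the units may simply not exist on the host):

// Stop, disable, and mask competing container-runtime units.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	units := []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"}
	for _, u := range units {
		for _, action := range [][]string{
			{"stop", "-f", u},
			{"disable", u},
			{"mask", u},
		} {
			cmd := exec.Command("sudo", append([]string{"systemctl"}, action...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				// A missing unit is not fatal; the guest may simply not ship Docker.
				fmt.Printf("systemctl %v: %v (%s)\n", action, err, out)
			}
		}
	}
}
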
	I1210 00:58:53.198715  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 00:58:53.215082  129622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 00:58:53.215137  129622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:58:53.224564  129622 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 00:58:53.224623  129622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:58:53.233868  129622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:58:53.242871  129622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 00:58:53.253103  129622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
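
The two sed edits above pin the pause image to registry.k8s.io/pause:3.2 and the cgroup manager to cgroupfs in CRI-O's drop-in config. An in-process sketch of the same rewrite against a local copy of 02-crio.conf (the regexp approach is an assumption; the values come from the log):

// Rewrite pause_image and cgroup_manager in a CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // local copy, for illustration
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
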
	I1210 00:58:53.264359  129622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 00:58:53.274374  129622 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 00:58:53.274419  129622 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 00:58:53.288520  129622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 00:58:53.298472  129622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:58:53.440132  129622 ssh_runner.go:195] Run: sudo systemctl restart crio
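
The sysctl failure above just means br_netfilter was not loaded yet, so the module is loaded, IPv4 forwarding is enabled, and CRI-O is restarted. A sketch of the netfilter preparation (direct file writes stand in for the logged sysctl/echo commands and require root):

// Ensure bridge netfilter and IPv4 forwarding are available before starting CRI-O.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Sysctl missing: mirror the `sudo modprobe br_netfilter` step from the log.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v (%s)\n", err, out)
		}
	}
	// Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
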
	I1210 00:58:53.533270  129622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 00:58:53.533358  129622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 00:58:53.537681  129622 start.go:563] Will wait 60s for crictl version
	I1210 00:58:53.537744  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:53.541381  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 00:58:53.579557  129622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 00:58:53.579645  129622 ssh_runner.go:195] Run: crio --version
	I1210 00:58:53.617294  129622 ssh_runner.go:195] Run: crio --version
	I1210 00:58:53.648480  129622 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 00:58:53.649742  129622 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 00:58:53.652854  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:53.653244  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 00:58:53.653274  129622 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 00:58:53.653490  129622 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 00:58:53.657411  129622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
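
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway mapping. An equivalent sketch operating on a local copy of the file (the path is an assumption; the IP and hostname come from the log):

// Replace or append a single hostname entry in a hosts-style file.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		f := strings.Fields(line)
		if len(f) > 0 && f[len(f)-1] == host {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.61.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
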
	I1210 00:58:53.669051  129622 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 00:58:53.669174  129622 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 00:58:53.669220  129622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:58:53.700247  129622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:58:53.700309  129622 ssh_runner.go:195] Run: which lz4
	I1210 00:58:53.703891  129622 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 00:58:53.707526  129622 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 00:58:53.707556  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 00:58:55.114764  129622 crio.go:462] duration metric: took 1.410897754s to copy over tarball
	I1210 00:58:55.114839  129622 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 00:58:57.657923  129622 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.543045747s)
	I1210 00:58:57.657971  129622 crio.go:469] duration metric: took 2.543174707s to extract the tarball
	I1210 00:58:57.657983  129622 ssh_runner.go:146] rm: /preloaded.tar.lz4
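
The preload path above avoids pulling each image individually: a prebuilt tarball of the v1.20.0/cri-o images is copied to the guest and unpacked into /var with lz4-compressed tar. A sketch of the extract step, mirroring the logged command (paths as in the log, run locally for illustration):

// Unpack a preloaded image tarball into /var using lz4 decompression.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("extracted preload in %v\n", time.Since(start))
}
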
	I1210 00:58:57.701975  129622 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 00:58:57.746933  129622 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 00:58:57.746967  129622 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 00:58:57.747075  129622 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:58:57.747109  129622 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:57.747112  129622 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:57.747092  129622 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:57.747143  129622 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:57.747179  129622 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 00:58:57.747186  129622 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:57.747158  129622 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 00:58:57.748525  129622 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:57.748582  129622 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:57.748522  129622 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:57.748739  129622 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 00:58:57.748741  129622 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:57.748739  129622 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 00:58:57.748765  129622 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:57.748740  129622 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:58:57.888744  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:57.895764  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:57.898486  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:57.902900  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:57.904657  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:57.916686  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 00:58:57.947795  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 00:58:57.991639  129622 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 00:58:57.991701  129622 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:57.991701  129622 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 00:58:57.991738  129622 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:57.991766  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:57.991853  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058442  129622 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 00:58:58.058470  129622 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 00:58:58.058492  129622 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:58.058503  129622 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:58.058530  129622 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 00:58:58.058547  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058552  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058555  129622 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 00:58:58.058470  129622 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 00:58:58.058607  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058611  129622 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:58.058652  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058738  129622 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 00:58:58.058766  129622 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 00:58:58.058769  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:58.058805  129622 ssh_runner.go:195] Run: which crictl
	I1210 00:58:58.058827  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:58.072248  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:58:58.072266  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:58.072316  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:58.072350  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:58.156692  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:58.156694  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:58:58.160203  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:58.189224  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:58:58.189251  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:58.189319  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:58.189351  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:58.286816  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:58:58.286875  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 00:58:58.287575  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 00:58:58.340064  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 00:58:58.340119  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 00:58:58.340149  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 00:58:58.340211  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 00:58:58.391931  129622 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 00:58:58.431577  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 00:58:58.431590  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 00:58:58.477959  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 00:58:58.477968  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 00:58:58.486333  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 00:58:58.486476  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 00:58:58.497567  129622 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 00:58:58.696116  129622 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 00:58:58.833832  129622 cache_images.go:92] duration metric: took 1.08684119s to LoadCachedImages
	W1210 00:58:58.833926  129622 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
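
The failure above means the required v1.20.0 images are neither preloaded in CRI-O nor present in the local image cache, so kubeadm will have to pull them during init. A sketch of the "which required images are missing from the runtime" check, assuming crictl's JSON output exposes a repoTags list per image (the field name is an assumption):

// List required control-plane images that the CRI runtime does not yet have.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/kube-controller-manager:v1.20.0",
		"registry.k8s.io/kube-scheduler:v1.20.0",
		"registry.k8s.io/kube-proxy:v1.20.0",
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/coredns:1.7.0",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var listed criImages
	if err := json.Unmarshal(out, &listed); err != nil {
		panic(err)
	}
	present := map[string]bool{}
	for _, img := range listed.Images {
		for _, tag := range img.RepoTags {
			present[tag] = true
		}
	}
	for _, want := range required {
		if !present[want] {
			fmt.Println("missing:", want)
		}
	}
}
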
	I1210 00:58:58.833947  129622 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 00:58:58.834076  129622 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 00:58:58.834183  129622 ssh_runner.go:195] Run: crio config
	I1210 00:58:58.880334  129622 cni.go:84] Creating CNI manager for ""
	I1210 00:58:58.880366  129622 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 00:58:58.880383  129622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 00:58:58.880413  129622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 00:58:58.880593  129622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 00:58:58.880668  129622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 00:58:58.890436  129622 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 00:58:58.890500  129622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 00:58:58.899441  129622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 00:58:58.915075  129622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 00:58:58.931349  129622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 00:58:58.947537  129622 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 00:58:58.950978  129622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 00:58:58.961831  129622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 00:58:59.075044  129622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 00:58:59.091583  129622 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 00:58:59.091607  129622 certs.go:194] generating shared ca certs ...
	I1210 00:58:59.091643  129622 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.091803  129622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 00:58:59.091842  129622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 00:58:59.091852  129622 certs.go:256] generating profile certs ...
	I1210 00:58:59.091905  129622 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 00:58:59.091925  129622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt with IP's: []
	I1210 00:58:59.253138  129622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt ...
	I1210 00:58:59.253175  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: {Name:mkfea8b089830d0661ca909df370bad9b580ddf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.253387  129622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key ...
	I1210 00:58:59.253408  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key: {Name:mkf37a6e7e16d3795aba8e104eefc6f3b28e6757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.253524  129622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 00:58:59.253546  129622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt.11e7a196 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.11]
	I1210 00:58:59.473419  129622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt.11e7a196 ...
	I1210 00:58:59.473458  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt.11e7a196: {Name:mk90958795f7592b55b00459b59cf901146105c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.473669  129622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196 ...
	I1210 00:58:59.473690  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196: {Name:mkf258c85bf34279fadd456000e3305c00dd6963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.473799  129622 certs.go:381] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt.11e7a196 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt
	I1210 00:58:59.473884  129622 certs.go:385] copying /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196 -> /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key
	I1210 00:58:59.473947  129622 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 00:58:59.473967  129622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt with IP's: []
	I1210 00:58:59.855097  129622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt ...
	I1210 00:58:59.855129  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt: {Name:mk7eafa08282c6744fe62aba01529c983d203086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 00:58:59.855327  129622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key ...
	I1210 00:58:59.855348  129622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key: {Name:mk31f15e2b5b396e11facd76390fcc6ed91718bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
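
certs.go above generates the per-profile certificates; the apiserver serving cert is issued for the service VIP, localhost, 10.0.0.1, and the node IP. A compact sketch producing a cert with those same IP SANs, self-signed here for brevity rather than signed by the minikubeCA:

// Generate an ECDSA key and a serving certificate with fixed IP SANs.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.11"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.crt",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
		panic(err)
	}
}
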
	I1210 00:58:59.855581  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 00:58:59.855625  129622 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 00:58:59.855638  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 00:58:59.855666  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 00:58:59.855693  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 00:58:59.855718  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 00:58:59.855797  129622 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 00:58:59.856373  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 00:58:59.881529  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 00:58:59.905204  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 00:58:59.931574  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 00:58:59.965702  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 00:58:59.992032  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 00:59:00.013987  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 00:59:00.035204  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 00:59:00.056293  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 00:59:00.080014  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 00:59:00.106205  129622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 00:59:00.129958  129622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 00:59:00.144756  129622 ssh_runner.go:195] Run: openssl version
	I1210 00:59:00.150188  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 00:59:00.159341  129622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 00:59:00.163249  129622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 00:59:00.163302  129622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 00:59:00.168489  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 00:59:00.177707  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 00:59:00.187008  129622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 00:59:00.190934  129622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 00:59:00.190989  129622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 00:59:00.195901  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 00:59:00.205018  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 00:59:00.214826  129622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:59:00.218877  129622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:59:00.218927  129622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 00:59:00.224204  129622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
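
The openssl/ln pairs above are the classic c_rehash pattern: each CA bundle copied into /usr/share/ca-certificates gets a <subject-hash>.0 symlink under /etc/ssl/certs so OpenSSL can look it up by hash. A sketch of creating one such link (paths are illustrative and the step needs root):

// Compute a certificate's subject hash with openssl and create the hash.0 symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func hashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// ln -fs: remove any existing link, then point it at the cert.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println("skipping:", err)
		return
	}
	fmt.Println("created", link)
}
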
	I1210 00:59:00.233720  129622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 00:59:00.237458  129622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1210 00:59:00.237514  129622 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:59:00.237615  129622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 00:59:00.237665  129622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 00:59:00.275747  129622 cri.go:89] found id: ""
	I1210 00:59:00.275822  129622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 00:59:00.284785  129622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 00:59:00.293399  129622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 00:59:00.301775  129622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 00:59:00.301792  129622 kubeadm.go:157] found existing configuration files:
	
	I1210 00:59:00.301830  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 00:59:00.309641  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 00:59:00.309688  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 00:59:00.317721  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 00:59:00.325598  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 00:59:00.325635  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 00:59:00.333665  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 00:59:00.341479  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 00:59:00.341549  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 00:59:00.349554  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 00:59:00.357281  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 00:59:00.357327  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 00:59:00.365322  129622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 00:59:00.471821  129622 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 00:59:00.471899  129622 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 00:59:00.619599  129622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 00:59:00.619764  129622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 00:59:00.619928  129622 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 00:59:00.836128  129622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 00:59:00.936731  129622 out.go:235]   - Generating certificates and keys ...
	I1210 00:59:00.936883  129622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 00:59:00.936982  129622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 00:59:00.937082  129622 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1210 00:59:01.018898  129622 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1210 00:59:01.128590  129622 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1210 00:59:01.543470  129622 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1210 00:59:01.795090  129622 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1210 00:59:01.795296  129622 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-094470] and IPs [192.168.61.11 127.0.0.1 ::1]
	I1210 00:59:01.890330  129622 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1210 00:59:01.890577  129622 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-094470] and IPs [192.168.61.11 127.0.0.1 ::1]
	I1210 00:59:02.164250  129622 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1210 00:59:02.393321  129622 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1210 00:59:02.485647  129622 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1210 00:59:02.485936  129622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 00:59:02.665053  129622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 00:59:02.731004  129622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 00:59:02.794360  129622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 00:59:02.974516  129622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 00:59:02.992003  129622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 00:59:02.993451  129622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 00:59:02.993558  129622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 00:59:03.127246  129622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 00:59:03.129338  129622 out.go:235]   - Booting up control plane ...
	I1210 00:59:03.129459  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 00:59:03.136769  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 00:59:03.138643  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 00:59:03.151727  129622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 00:59:03.156864  129622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 00:59:43.152338  129622 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 00:59:43.153112  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:59:43.153366  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:59:48.154288  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:59:48.154610  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 00:59:58.154548  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 00:59:58.154826  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:00:18.155282  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:00:18.155560  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:00:58.158406  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:00:58.158704  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:00:58.158721  129622 kubeadm.go:310] 
	I1210 01:00:58.158788  129622 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:00:58.158881  129622 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:00:58.158906  129622 kubeadm.go:310] 
	I1210 01:00:58.158955  129622 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:00:58.159005  129622 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:00:58.159152  129622 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:00:58.159173  129622 kubeadm.go:310] 
	I1210 01:00:58.159341  129622 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:00:58.159406  129622 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:00:58.159462  129622 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:00:58.159472  129622 kubeadm.go:310] 
	I1210 01:00:58.159631  129622 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:00:58.159769  129622 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:00:58.159779  129622 kubeadm.go:310] 
	I1210 01:00:58.159937  129622 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:00:58.160101  129622 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:00:58.160212  129622 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:00:58.160334  129622 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:00:58.160346  129622 kubeadm.go:310] 
	I1210 01:00:58.160775  129622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:00:58.160897  129622 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:00:58.161016  129622 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:00:58.161176  129622 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-094470] and IPs [192.168.61.11 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-094470] and IPs [192.168.61.11 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:00:58.161221  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:00:58.611921  129622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:00:58.626277  129622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:00:58.635093  129622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:00:58.635115  129622 kubeadm.go:157] found existing configuration files:
	
	I1210 01:00:58.635166  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:00:58.643251  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:00:58.643298  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:00:58.651593  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:00:58.659880  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:00:58.659917  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:00:58.668081  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:00:58.675734  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:00:58.675781  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:00:58.683651  129622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:00:58.691433  129622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:00:58.691469  129622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:00:58.699302  129622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:00:58.885670  129622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:02:55.311189  129622 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:02:55.311333  129622 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:02:55.312234  129622 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:02:55.312286  129622 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:02:55.312348  129622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:02:55.312435  129622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:02:55.312515  129622 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:02:55.312576  129622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:02:55.314206  129622 out.go:235]   - Generating certificates and keys ...
	I1210 01:02:55.314305  129622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:02:55.314392  129622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:02:55.314501  129622 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:02:55.314626  129622 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:02:55.314724  129622 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:02:55.314778  129622 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:02:55.314828  129622 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:02:55.314911  129622 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:02:55.314977  129622 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:02:55.315037  129622 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:02:55.315069  129622 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:02:55.315116  129622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:02:55.315157  129622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:02:55.315213  129622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:02:55.315293  129622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:02:55.315367  129622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:02:55.315450  129622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:02:55.315517  129622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:02:55.315548  129622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:02:55.315600  129622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:02:55.316911  129622 out.go:235]   - Booting up control plane ...
	I1210 01:02:55.316982  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:02:55.317042  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:02:55.317097  129622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:02:55.317160  129622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:02:55.317302  129622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:02:55.317364  129622 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:02:55.317479  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:02:55.317702  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:02:55.317796  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:02:55.317992  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:02:55.318056  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:02:55.318224  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:02:55.318295  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:02:55.318466  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:02:55.318528  129622 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:02:55.318711  129622 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:02:55.318719  129622 kubeadm.go:310] 
	I1210 01:02:55.318754  129622 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:02:55.318790  129622 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:02:55.318796  129622 kubeadm.go:310] 
	I1210 01:02:55.318825  129622 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:02:55.318859  129622 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:02:55.318962  129622 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:02:55.318970  129622 kubeadm.go:310] 
	I1210 01:02:55.319096  129622 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:02:55.319147  129622 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:02:55.319187  129622 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:02:55.319201  129622 kubeadm.go:310] 
	I1210 01:02:55.319339  129622 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:02:55.319461  129622 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:02:55.319475  129622 kubeadm.go:310] 
	I1210 01:02:55.319628  129622 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:02:55.319741  129622 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:02:55.319843  129622 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:02:55.319994  129622 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:02:55.320083  129622 kubeadm.go:310] 
	I1210 01:02:55.320107  129622 kubeadm.go:394] duration metric: took 3m55.08258444s to StartCluster
	I1210 01:02:55.320176  129622 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:02:55.320246  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:02:55.359725  129622 cri.go:89] found id: ""
	I1210 01:02:55.359754  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.359762  129622 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:02:55.359772  129622 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:02:55.359839  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:02:55.391588  129622 cri.go:89] found id: ""
	I1210 01:02:55.391615  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.391624  129622 logs.go:284] No container was found matching "etcd"
	I1210 01:02:55.391630  129622 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:02:55.391682  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:02:55.422537  129622 cri.go:89] found id: ""
	I1210 01:02:55.422576  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.422586  129622 logs.go:284] No container was found matching "coredns"
	I1210 01:02:55.422593  129622 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:02:55.422661  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:02:55.453313  129622 cri.go:89] found id: ""
	I1210 01:02:55.453350  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.453359  129622 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:02:55.453366  129622 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:02:55.453422  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:02:55.489244  129622 cri.go:89] found id: ""
	I1210 01:02:55.489267  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.489274  129622 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:02:55.489280  129622 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:02:55.489325  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:02:55.519098  129622 cri.go:89] found id: ""
	I1210 01:02:55.519120  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.519128  129622 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:02:55.519134  129622 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:02:55.519184  129622 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:02:55.549599  129622 cri.go:89] found id: ""
	I1210 01:02:55.549628  129622 logs.go:282] 0 containers: []
	W1210 01:02:55.549639  129622 logs.go:284] No container was found matching "kindnet"
	I1210 01:02:55.549656  129622 logs.go:123] Gathering logs for kubelet ...
	I1210 01:02:55.549671  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:02:55.599480  129622 logs.go:123] Gathering logs for dmesg ...
	I1210 01:02:55.599516  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:02:55.611720  129622 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:02:55.611754  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:02:55.754542  129622 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:02:55.754597  129622 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:02:55.754615  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:02:55.867804  129622 logs.go:123] Gathering logs for container status ...
	I1210 01:02:55.867839  129622 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:02:55.903699  129622 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:02:55.903755  129622 out.go:270] * 
	W1210 01:02:55.903884  129622 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:02:55.903914  129622 out.go:270] * 
	W1210 01:02:55.904744  129622 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:02:55.907494  129622 out.go:201] 
	W1210 01:02:55.908799  129622 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:02:55.908833  129622 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:02:55.908857  129622 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:02:55.910135  129622 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 6 (231.238024ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:02:56.191269  132238 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-094470" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (270.37s)
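Editor's note: the kubeadm trace above boils down to the kubelet never answering its /healthz probe on 127.0.0.1:10248 before the 4m0s wait-control-plane timeout. Below is a minimal, hedged troubleshooting sketch built only from the hints the log itself prints; the profile name and flags are copied from the failing command, and whether the cgroup driver is actually the culprit is an assumption to be confirmed in the kubelet journal first.

	# Inspect the kubelet inside the VM, as the kubeadm output suggests
	minikube -p old-k8s-version-094470 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-094470 ssh -- sudo journalctl -xeu kubelet --no-pager

	# List control-plane containers via crictl (the same command the log prints)
	minikube -p old-k8s-version-094470 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# If the journal shows a cgroup-driver mismatch, retry the start with the flag
	# minikube itself suggests above (hypothetical re-run; other flags unchanged)
	out/minikube-linux-amd64 start -p old-k8s-version-094470 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the journal instead points at the disabled kubelet service flagged by the [WARNING Service-Kubelet] line, enabling the unit inside the VM would be the corresponding fix.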

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-584179 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-584179 --alsologtostderr -v=3: exit status 82 (2m0.488143591s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-584179"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 01:00:50.753746  131467 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:00:50.754032  131467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:00:50.754044  131467 out.go:358] Setting ErrFile to fd 2...
	I1210 01:00:50.754048  131467 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:00:50.754238  131467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:00:50.754485  131467 out.go:352] Setting JSON to false
	I1210 01:00:50.754586  131467 mustload.go:65] Loading cluster: no-preload-584179
	I1210 01:00:50.754992  131467 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:00:50.755090  131467 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:00:50.755267  131467 mustload.go:65] Loading cluster: no-preload-584179
	I1210 01:00:50.755376  131467 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:00:50.755406  131467 stop.go:39] StopHost: no-preload-584179
	I1210 01:00:50.755835  131467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:00:50.755889  131467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:00:50.770733  131467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I1210 01:00:50.771163  131467 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:00:50.771791  131467 main.go:141] libmachine: Using API Version  1
	I1210 01:00:50.771817  131467 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:00:50.772206  131467 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:00:50.774340  131467 out.go:177] * Stopping node "no-preload-584179"  ...
	I1210 01:00:50.775598  131467 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 01:00:50.775633  131467 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:00:50.775844  131467 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 01:00:50.775878  131467 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:00:50.778535  131467 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:00:50.778989  131467 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 01:59:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:00:50.779019  131467 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:00:50.779174  131467 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:00:50.779345  131467 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:00:50.779491  131467 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:00:50.779652  131467 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:00:50.863180  131467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 01:00:50.920270  131467 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 01:00:50.977830  131467 main.go:141] libmachine: Stopping "no-preload-584179"...
	I1210 01:00:50.977862  131467 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:00:50.979563  131467 main.go:141] libmachine: (no-preload-584179) Calling .Stop
	I1210 01:00:50.983386  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 0/120
	I1210 01:00:51.984750  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 1/120
	I1210 01:00:52.986543  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 2/120
	I1210 01:00:53.988266  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 3/120
	I1210 01:00:54.989966  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 4/120
	I1210 01:00:55.992160  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 5/120
	I1210 01:00:56.993354  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 6/120
	I1210 01:00:57.994709  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 7/120
	I1210 01:00:58.997504  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 8/120
	I1210 01:00:59.998859  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 9/120
	I1210 01:01:01.000956  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 10/120
	I1210 01:01:02.002298  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 11/120
	I1210 01:01:03.003757  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 12/120
	I1210 01:01:04.005157  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 13/120
	I1210 01:01:05.006369  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 14/120
	I1210 01:01:06.008173  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 15/120
	I1210 01:01:07.010983  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 16/120
	I1210 01:01:08.013084  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 17/120
	I1210 01:01:09.015487  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 18/120
	I1210 01:01:10.017797  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 19/120
	I1210 01:01:11.019961  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 20/120
	I1210 01:01:12.021859  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 21/120
	I1210 01:01:13.023161  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 22/120
	I1210 01:01:14.024835  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 23/120
	I1210 01:01:15.027307  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 24/120
	I1210 01:01:16.029350  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 25/120
	I1210 01:01:17.030764  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 26/120
	I1210 01:01:18.032966  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 27/120
	I1210 01:01:19.034235  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 28/120
	I1210 01:01:20.035566  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 29/120
	I1210 01:01:21.037737  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 30/120
	I1210 01:01:22.039150  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 31/120
	I1210 01:01:23.041078  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 32/120
	I1210 01:01:24.042544  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 33/120
	I1210 01:01:25.043864  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 34/120
	I1210 01:01:26.045787  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 35/120
	I1210 01:01:27.047315  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 36/120
	I1210 01:01:28.048793  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 37/120
	I1210 01:01:29.049930  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 38/120
	I1210 01:01:30.052329  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 39/120
	I1210 01:01:31.054480  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 40/120
	I1210 01:01:32.055962  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 41/120
	I1210 01:01:33.057271  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 42/120
	I1210 01:01:34.058898  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 43/120
	I1210 01:01:35.061047  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 44/120
	I1210 01:01:36.062966  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 45/120
	I1210 01:01:37.064588  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 46/120
	I1210 01:01:38.065896  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 47/120
	I1210 01:01:39.067108  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 48/120
	I1210 01:01:40.069012  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 49/120
	I1210 01:01:41.071138  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 50/120
	I1210 01:01:42.072609  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 51/120
	I1210 01:01:43.074227  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 52/120
	I1210 01:01:44.075439  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 53/120
	I1210 01:01:45.076730  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 54/120
	I1210 01:01:46.079039  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 55/120
	I1210 01:01:47.080363  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 56/120
	I1210 01:01:48.081729  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 57/120
	I1210 01:01:49.083030  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 58/120
	I1210 01:01:50.085168  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 59/120
	I1210 01:01:51.087317  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 60/120
	I1210 01:01:52.088785  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 61/120
	I1210 01:01:53.091007  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 62/120
	I1210 01:01:54.092863  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 63/120
	I1210 01:01:55.095058  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 64/120
	I1210 01:01:56.096955  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 65/120
	I1210 01:01:57.098278  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 66/120
	I1210 01:01:58.099557  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 67/120
	I1210 01:01:59.101214  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 68/120
	I1210 01:02:00.102470  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 69/120
	I1210 01:02:01.104547  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 70/120
	I1210 01:02:02.105999  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 71/120
	I1210 01:02:03.107273  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 72/120
	I1210 01:02:04.109235  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 73/120
	I1210 01:02:05.111135  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 74/120
	I1210 01:02:06.113042  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 75/120
	I1210 01:02:07.115062  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 76/120
	I1210 01:02:08.117056  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 77/120
	I1210 01:02:09.118998  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 78/120
	I1210 01:02:10.120958  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 79/120
	I1210 01:02:11.122823  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 80/120
	I1210 01:02:12.123968  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 81/120
	I1210 01:02:13.125379  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 82/120
	I1210 01:02:14.126495  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 83/120
	I1210 01:02:15.127920  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 84/120
	I1210 01:02:16.130064  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 85/120
	I1210 01:02:17.131316  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 86/120
	I1210 01:02:18.132695  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 87/120
	I1210 01:02:19.133770  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 88/120
	I1210 01:02:20.135221  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 89/120
	I1210 01:02:21.137359  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 90/120
	I1210 01:02:22.138524  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 91/120
	I1210 01:02:23.140083  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 92/120
	I1210 01:02:24.141166  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 93/120
	I1210 01:02:25.142465  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 94/120
	I1210 01:02:26.144654  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 95/120
	I1210 01:02:27.145778  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 96/120
	I1210 01:02:28.146963  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 97/120
	I1210 01:02:29.148747  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 98/120
	I1210 01:02:30.149959  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 99/120
	I1210 01:02:31.152111  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 100/120
	I1210 01:02:32.154142  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 101/120
	I1210 01:02:33.155593  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 102/120
	I1210 01:02:34.156839  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 103/120
	I1210 01:02:35.158229  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 104/120
	I1210 01:02:36.160070  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 105/120
	I1210 01:02:37.161616  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 106/120
	I1210 01:02:38.162819  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 107/120
	I1210 01:02:39.164826  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 108/120
	I1210 01:02:40.166106  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 109/120
	I1210 01:02:41.167938  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 110/120
	I1210 01:02:42.169234  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 111/120
	I1210 01:02:43.170544  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 112/120
	I1210 01:02:44.171688  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 113/120
	I1210 01:02:45.172931  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 114/120
	I1210 01:02:46.174910  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 115/120
	I1210 01:02:47.176307  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 116/120
	I1210 01:02:48.177530  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 117/120
	I1210 01:02:49.178892  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 118/120
	I1210 01:02:50.180194  131467 main.go:141] libmachine: (no-preload-584179) Waiting for machine to stop 119/120
	I1210 01:02:51.181542  131467 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 01:02:51.181611  131467 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1210 01:02:51.183362  131467 out.go:201] 
	W1210 01:02:51.184662  131467 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1210 01:02:51.184690  131467 out.go:270] * 
	* 
	W1210 01:02:51.188278  131467 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:02:51.189561  131467 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-584179 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179: exit status 3 (18.599411583s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:09.790918  132182 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host
	E1210 01:03:09.790942  132182 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-584179" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.09s)
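Editor's note: exit status 82 (GUEST_STOP_TIMEOUT) here means the kvm2 driver polled libvirt 120 times over two minutes without the domain ever leaving the Running state. A hedged sketch of follow-up commands on the CI host, combining the log-collection step the report box suggests with direct libvirt inspection; the qemu:///system URI and the domain name no-preload-584179 are taken from the log, and virsh access on the Jenkins worker is an assumption.

	# Collect minikube's own logs, as suggested in the box above
	out/minikube-linux-amd64 logs -p no-preload-584179 --file=logs.txt

	# Ask libvirt directly what state the domain is in
	sudo virsh --connect qemu:///system list --all
	sudo virsh --connect qemu:///system dominfo no-preload-584179

	# Last resort when a graceful shutdown never completes: hard power-off the domain
	sudo virsh --connect qemu:///system destroy no-preload-584179

If the domain only goes down after a destroy, the next step would likely be looking at the guest's shutdown handling rather than the driver's 120-attempt wait.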

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-274758 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-274758 --alsologtostderr -v=3: exit status 82 (2m0.520611002s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-274758"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 01:00:55.477981  131556 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:00:55.478275  131556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:00:55.478286  131556 out.go:358] Setting ErrFile to fd 2...
	I1210 01:00:55.478294  131556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:00:55.478497  131556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:00:55.478780  131556 out.go:352] Setting JSON to false
	I1210 01:00:55.478893  131556 mustload.go:65] Loading cluster: embed-certs-274758
	I1210 01:00:55.479305  131556 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:00:55.479395  131556 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:00:55.479621  131556 mustload.go:65] Loading cluster: embed-certs-274758
	I1210 01:00:55.479754  131556 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:00:55.479811  131556 stop.go:39] StopHost: embed-certs-274758
	I1210 01:00:55.480259  131556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:00:55.480323  131556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:00:55.495966  131556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I1210 01:00:55.496428  131556 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:00:55.497149  131556 main.go:141] libmachine: Using API Version  1
	I1210 01:00:55.497176  131556 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:00:55.497662  131556 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:00:55.500075  131556 out.go:177] * Stopping node "embed-certs-274758"  ...
	I1210 01:00:55.501265  131556 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 01:00:55.501299  131556 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:00:55.501534  131556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 01:00:55.501562  131556 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:00:55.504697  131556 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:00:55.505120  131556 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 01:59:35 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:00:55.505160  131556 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:00:55.505397  131556 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:00:55.505607  131556 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:00:55.505741  131556 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:00:55.505908  131556 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:00:55.616916  131556 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 01:00:55.700601  131556 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 01:00:55.755563  131556 main.go:141] libmachine: Stopping "embed-certs-274758"...
	I1210 01:00:55.755610  131556 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:00:55.757356  131556 main.go:141] libmachine: (embed-certs-274758) Calling .Stop
	I1210 01:00:55.760858  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 0/120
	I1210 01:00:56.762476  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 1/120
	I1210 01:00:57.763874  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 2/120
	I1210 01:00:58.765260  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 3/120
	I1210 01:00:59.766788  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 4/120
	I1210 01:01:00.768736  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 5/120
	I1210 01:01:01.769897  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 6/120
	I1210 01:01:02.771574  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 7/120
	I1210 01:01:03.772994  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 8/120
	I1210 01:01:04.774210  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 9/120
	I1210 01:01:05.775656  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 10/120
	I1210 01:01:06.777151  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 11/120
	I1210 01:01:07.778436  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 12/120
	I1210 01:01:08.779998  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 13/120
	I1210 01:01:09.781678  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 14/120
	I1210 01:01:10.783364  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 15/120
	I1210 01:01:11.784915  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 16/120
	I1210 01:01:12.786400  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 17/120
	I1210 01:01:13.787919  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 18/120
	I1210 01:01:14.789471  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 19/120
	I1210 01:01:15.791891  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 20/120
	I1210 01:01:16.793245  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 21/120
	I1210 01:01:17.794631  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 22/120
	I1210 01:01:18.795906  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 23/120
	I1210 01:01:19.797476  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 24/120
	I1210 01:01:20.799532  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 25/120
	I1210 01:01:21.801021  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 26/120
	I1210 01:01:22.802304  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 27/120
	I1210 01:01:23.803715  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 28/120
	I1210 01:01:24.804912  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 29/120
	I1210 01:01:25.807511  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 30/120
	I1210 01:01:26.808812  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 31/120
	I1210 01:01:27.810263  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 32/120
	I1210 01:01:28.811669  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 33/120
	I1210 01:01:29.812973  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 34/120
	I1210 01:01:30.814910  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 35/120
	I1210 01:01:31.816941  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 36/120
	I1210 01:01:32.818503  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 37/120
	I1210 01:01:33.819796  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 38/120
	I1210 01:01:34.821245  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 39/120
	I1210 01:01:35.823473  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 40/120
	I1210 01:01:36.824899  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 41/120
	I1210 01:01:37.826200  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 42/120
	I1210 01:01:38.827613  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 43/120
	I1210 01:01:39.828964  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 44/120
	I1210 01:01:40.830804  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 45/120
	I1210 01:01:41.832157  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 46/120
	I1210 01:01:42.833474  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 47/120
	I1210 01:01:43.835028  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 48/120
	I1210 01:01:44.836957  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 49/120
	I1210 01:01:45.839207  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 50/120
	I1210 01:01:46.840702  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 51/120
	I1210 01:01:47.842268  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 52/120
	I1210 01:01:48.843781  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 53/120
	I1210 01:01:49.845368  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 54/120
	I1210 01:01:50.847496  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 55/120
	I1210 01:01:51.848801  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 56/120
	I1210 01:01:52.850237  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 57/120
	I1210 01:01:53.852200  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 58/120
	I1210 01:01:54.853382  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 59/120
	I1210 01:01:55.854523  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 60/120
	I1210 01:01:56.855799  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 61/120
	I1210 01:01:57.856939  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 62/120
	I1210 01:01:58.858368  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 63/120
	I1210 01:01:59.859648  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 64/120
	I1210 01:02:00.861470  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 65/120
	I1210 01:02:01.862939  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 66/120
	I1210 01:02:02.864172  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 67/120
	I1210 01:02:03.865514  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 68/120
	I1210 01:02:04.866884  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 69/120
	I1210 01:02:05.869344  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 70/120
	I1210 01:02:06.870590  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 71/120
	I1210 01:02:07.872069  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 72/120
	I1210 01:02:08.873556  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 73/120
	I1210 01:02:09.874909  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 74/120
	I1210 01:02:10.876361  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 75/120
	I1210 01:02:11.877632  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 76/120
	I1210 01:02:12.878915  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 77/120
	I1210 01:02:13.880252  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 78/120
	I1210 01:02:14.881616  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 79/120
	I1210 01:02:15.883643  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 80/120
	I1210 01:02:16.884887  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 81/120
	I1210 01:02:17.886103  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 82/120
	I1210 01:02:18.887297  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 83/120
	I1210 01:02:19.888503  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 84/120
	I1210 01:02:20.890843  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 85/120
	I1210 01:02:21.892270  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 86/120
	I1210 01:02:22.893488  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 87/120
	I1210 01:02:23.894715  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 88/120
	I1210 01:02:24.895955  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 89/120
	I1210 01:02:25.898164  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 90/120
	I1210 01:02:26.899358  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 91/120
	I1210 01:02:27.900727  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 92/120
	I1210 01:02:28.901990  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 93/120
	I1210 01:02:29.903389  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 94/120
	I1210 01:02:30.905163  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 95/120
	I1210 01:02:31.906632  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 96/120
	I1210 01:02:32.907770  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 97/120
	I1210 01:02:33.908875  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 98/120
	I1210 01:02:34.910158  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 99/120
	I1210 01:02:35.912205  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 100/120
	I1210 01:02:36.913381  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 101/120
	I1210 01:02:37.914624  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 102/120
	I1210 01:02:38.915925  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 103/120
	I1210 01:02:39.917538  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 104/120
	I1210 01:02:40.919266  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 105/120
	I1210 01:02:41.921331  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 106/120
	I1210 01:02:42.922487  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 107/120
	I1210 01:02:43.923746  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 108/120
	I1210 01:02:44.925026  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 109/120
	I1210 01:02:45.927219  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 110/120
	I1210 01:02:46.928437  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 111/120
	I1210 01:02:47.930092  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 112/120
	I1210 01:02:48.931437  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 113/120
	I1210 01:02:49.932833  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 114/120
	I1210 01:02:50.934662  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 115/120
	I1210 01:02:51.935979  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 116/120
	I1210 01:02:52.937225  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 117/120
	I1210 01:02:53.938691  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 118/120
	I1210 01:02:54.940201  131556 main.go:141] libmachine: (embed-certs-274758) Waiting for machine to stop 119/120
	I1210 01:02:55.941329  131556 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 01:02:55.941381  131556 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1210 01:02:55.943095  131556 out.go:201] 
	W1210 01:02:55.944311  131556 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1210 01:02:55.944328  131556 out.go:270] * 
	* 
	W1210 01:02:55.947838  131556 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:02:55.949180  131556 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-274758 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758: exit status 3 (18.446386434s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:14.398865  132230 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host
	E1210 01:03:14.398888  132230 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-274758" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.97s)
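Editor's note: embed-certs-274758 fails the stop the same way, but the post-mortem is telling: about eighteen seconds after the timeout the status probe can no longer reach 192.168.72.76:22 (no route to host), so the guest most likely finished shutting down just after the 120-attempt window. A small hedged sketch for confirming that from the host; the domain name, IP, key path, and docker user are copied from the log, and the ConnectTimeout value is arbitrary.

	# Final word from libvirt on the domain's state
	sudo virsh --connect qemu:///system domstate embed-certs-274758

	# Probe the node over SSH with the key minikube generated for this machine
	ssh -o ConnectTimeout=5 \
	  -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa \
	  docker@192.168.72.76 uptime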

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-901295 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-901295 --alsologtostderr -v=3: exit status 82 (2m0.469220899s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-901295"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 01:02:11.203755  132006 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:02:11.203900  132006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:02:11.203914  132006 out.go:358] Setting ErrFile to fd 2...
	I1210 01:02:11.203920  132006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:02:11.204090  132006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:02:11.204294  132006 out.go:352] Setting JSON to false
	I1210 01:02:11.204375  132006 mustload.go:65] Loading cluster: default-k8s-diff-port-901295
	I1210 01:02:11.204723  132006 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:02:11.204784  132006 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:02:11.204944  132006 mustload.go:65] Loading cluster: default-k8s-diff-port-901295
	I1210 01:02:11.205061  132006 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:02:11.205098  132006 stop.go:39] StopHost: default-k8s-diff-port-901295
	I1210 01:02:11.205443  132006 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:02:11.205503  132006 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:02:11.220755  132006 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
	I1210 01:02:11.221281  132006 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:02:11.221873  132006 main.go:141] libmachine: Using API Version  1
	I1210 01:02:11.221893  132006 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:02:11.222322  132006 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:02:11.224532  132006 out.go:177] * Stopping node "default-k8s-diff-port-901295"  ...
	I1210 01:02:11.225775  132006 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1210 01:02:11.225819  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:02:11.226057  132006 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1210 01:02:11.226087  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:02:11.228703  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:02:11.229087  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:00:58 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:02:11.229119  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:02:11.229227  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:02:11.229375  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:02:11.229533  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:02:11.229668  132006 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:02:11.308198  132006 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1210 01:02:11.370244  132006 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1210 01:02:11.428782  132006 main.go:141] libmachine: Stopping "default-k8s-diff-port-901295"...
	I1210 01:02:11.428816  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:02:11.430794  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Stop
	I1210 01:02:11.434737  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 0/120
	I1210 01:02:12.436011  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 1/120
	I1210 01:02:13.437481  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 2/120
	I1210 01:02:14.438845  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 3/120
	I1210 01:02:15.441034  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 4/120
	I1210 01:02:16.443042  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 5/120
	I1210 01:02:17.444625  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 6/120
	I1210 01:02:18.446014  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 7/120
	I1210 01:02:19.447470  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 8/120
	I1210 01:02:20.448928  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 9/120
	I1210 01:02:21.451044  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 10/120
	I1210 01:02:22.452212  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 11/120
	I1210 01:02:23.453671  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 12/120
	I1210 01:02:24.455018  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 13/120
	I1210 01:02:25.457123  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 14/120
	I1210 01:02:26.459304  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 15/120
	I1210 01:02:27.460661  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 16/120
	I1210 01:02:28.461839  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 17/120
	I1210 01:02:29.463069  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 18/120
	I1210 01:02:30.464368  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 19/120
	I1210 01:02:31.466484  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 20/120
	I1210 01:02:32.467644  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 21/120
	I1210 01:02:33.468969  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 22/120
	I1210 01:02:34.470071  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 23/120
	I1210 01:02:35.471289  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 24/120
	I1210 01:02:36.473035  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 25/120
	I1210 01:02:37.474409  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 26/120
	I1210 01:02:38.475460  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 27/120
	I1210 01:02:39.476819  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 28/120
	I1210 01:02:40.478129  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 29/120
	I1210 01:02:41.480322  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 30/120
	I1210 01:02:42.481490  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 31/120
	I1210 01:02:43.483011  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 32/120
	I1210 01:02:44.484213  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 33/120
	I1210 01:02:45.485541  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 34/120
	I1210 01:02:46.487336  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 35/120
	I1210 01:02:47.488769  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 36/120
	I1210 01:02:48.490018  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 37/120
	I1210 01:02:49.491493  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 38/120
	I1210 01:02:50.492614  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 39/120
	I1210 01:02:51.494939  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 40/120
	I1210 01:02:52.497146  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 41/120
	I1210 01:02:53.498483  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 42/120
	I1210 01:02:54.499795  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 43/120
	I1210 01:02:55.501235  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 44/120
	I1210 01:02:56.503050  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 45/120
	I1210 01:02:57.505003  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 46/120
	I1210 01:02:58.506403  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 47/120
	I1210 01:02:59.507772  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 48/120
	I1210 01:03:00.509148  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 49/120
	I1210 01:03:01.511304  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 50/120
	I1210 01:03:02.512828  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 51/120
	I1210 01:03:03.514155  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 52/120
	I1210 01:03:04.515586  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 53/120
	I1210 01:03:05.516838  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 54/120
	I1210 01:03:06.518786  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 55/120
	I1210 01:03:07.520880  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 56/120
	I1210 01:03:08.522340  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 57/120
	I1210 01:03:09.523584  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 58/120
	I1210 01:03:10.524898  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 59/120
	I1210 01:03:11.526993  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 60/120
	I1210 01:03:12.528423  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 61/120
	I1210 01:03:13.529775  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 62/120
	I1210 01:03:14.530784  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 63/120
	I1210 01:03:15.532321  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 64/120
	I1210 01:03:16.534286  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 65/120
	I1210 01:03:17.535658  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 66/120
	I1210 01:03:18.537111  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 67/120
	I1210 01:03:19.538433  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 68/120
	I1210 01:03:20.539757  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 69/120
	I1210 01:03:21.541217  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 70/120
	I1210 01:03:22.542602  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 71/120
	I1210 01:03:23.543887  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 72/120
	I1210 01:03:24.545304  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 73/120
	I1210 01:03:25.546723  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 74/120
	I1210 01:03:26.548620  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 75/120
	I1210 01:03:27.549999  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 76/120
	I1210 01:03:28.551314  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 77/120
	I1210 01:03:29.552667  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 78/120
	I1210 01:03:30.553977  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 79/120
	I1210 01:03:31.556114  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 80/120
	I1210 01:03:32.557478  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 81/120
	I1210 01:03:33.558766  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 82/120
	I1210 01:03:34.561160  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 83/120
	I1210 01:03:35.562670  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 84/120
	I1210 01:03:36.564737  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 85/120
	I1210 01:03:37.565969  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 86/120
	I1210 01:03:38.567458  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 87/120
	I1210 01:03:39.568943  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 88/120
	I1210 01:03:40.570230  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 89/120
	I1210 01:03:41.572516  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 90/120
	I1210 01:03:42.574035  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 91/120
	I1210 01:03:43.575426  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 92/120
	I1210 01:03:44.577122  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 93/120
	I1210 01:03:45.578450  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 94/120
	I1210 01:03:46.580349  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 95/120
	I1210 01:03:47.581735  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 96/120
	I1210 01:03:48.583079  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 97/120
	I1210 01:03:49.584435  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 98/120
	I1210 01:03:50.585759  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 99/120
	I1210 01:03:51.587964  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 100/120
	I1210 01:03:52.589379  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 101/120
	I1210 01:03:53.590960  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 102/120
	I1210 01:03:54.592623  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 103/120
	I1210 01:03:55.594063  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 104/120
	I1210 01:03:56.596070  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 105/120
	I1210 01:03:57.597530  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 106/120
	I1210 01:03:58.599014  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 107/120
	I1210 01:03:59.600405  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 108/120
	I1210 01:04:00.601881  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 109/120
	I1210 01:04:01.604488  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 110/120
	I1210 01:04:02.605743  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 111/120
	I1210 01:04:03.607098  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 112/120
	I1210 01:04:04.608543  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 113/120
	I1210 01:04:05.609928  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 114/120
	I1210 01:04:06.612266  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 115/120
	I1210 01:04:07.613520  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 116/120
	I1210 01:04:08.614762  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 117/120
	I1210 01:04:09.616080  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 118/120
	I1210 01:04:10.617407  132006 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for machine to stop 119/120
	I1210 01:04:11.618670  132006 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1210 01:04:11.618734  132006 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1210 01:04:11.620604  132006 out.go:201] 
	W1210 01:04:11.621934  132006 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1210 01:04:11.621950  132006 out.go:270] * 
	* 
	W1210 01:04:11.625146  132006 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:04:11.626244  132006 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-901295 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295: exit status 3 (18.547471212s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:04:30.174890  132913 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	E1210 01:04:30.174914  132913 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-901295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)
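The loop above is the kvm2 driver's stop poll: 120 checks at roughly one-second intervals (two minutes) before GUEST_STOP_TIMEOUT is raised with the domain still reported as "Running". A minimal sketch of how the guest state could be confirmed and forced down directly against libvirt, assuming shell access to the CI host, the system libvirt URI used by the kvm2 driver, and the domain name from the log (this is not something the test itself runs):

	virsh -c qemu:///system domstate default-k8s-diff-port-901295   # expect "running" if the ACPI shutdown was ignored
	virsh -c qemu:///system shutdown default-k8s-diff-port-901295   # retry a graceful ACPI shutdown
	virsh -c qemu:///system destroy default-k8s-diff-port-901295    # hard power-off as a last resort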

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-094470 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-094470 create -f testdata/busybox.yaml: exit status 1 (43.84419ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-094470" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-094470 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 6 (212.591045ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:02:56.448573  132299 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-094470" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 6 (215.251195ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:02:56.663610  132330 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-094470" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
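Both the kubectl call and the status probes point at the same root cause: the "old-k8s-version-094470" context is missing from the kubeconfig, so the busybox deploy never reaches a cluster. A minimal sketch of the repair path the warning above suggests, using only standard kubectl/minikube subcommands and the profile name from the log (whether it recovers this particular run is untested):

	kubectl config get-contexts                                          # confirm the context really is absent
	out/minikube-linux-amd64 update-context -p old-k8s-version-094470   # rewrite the kubeconfig entry for this profile
	kubectl --context old-k8s-version-094470 get nodes                  # re-check connectivity before retrying the deploy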

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.163339763s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-094470 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-094470 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-094470 describe deploy/metrics-server -n kube-system: exit status 1 (43.701562ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-094470" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-094470 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 6 (216.86772ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:04:34.087967  133067 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-094470" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.42s)
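The addon enable fails inside the guest: kubectl on the node cannot reach an apiserver on localhost:8443, so the v1.20.0 control plane is either not running or not listening yet. A hedged sketch of how that could be checked on the node, assuming the profile is reachable over SSH; only standard minikube, crictl and ss invocations are used:

	out/minikube-linux-amd64 ssh -p old-k8s-version-094470 -- sudo crictl ps -a --name kube-apiserver   # is the apiserver container present and running?
	out/minikube-linux-amd64 ssh -p old-k8s-version-094470 -- sudo ss -ltn                              # is anything listening on :8443?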

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179: exit status 3 (3.167497483s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:12.958889  132443 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host
	E1210 01:03:12.958913  132443 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-584179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-584179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152586246s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-584179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179: exit status 3 (3.06281163s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:22.174893  132559 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host
	E1210 01:03:22.174921  132559 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.169:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-584179" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
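Here even the status probe cannot open an SSH session: every dial to 192.168.50.169:22 returns "no route to host", so the first question is whether the guest is reachable on the network at all. A minimal connectivity probe from the CI host, assuming a netcat binary is available and using the IP from the log:

	ip route get 192.168.50.169     # which interface/route the host would use to reach the guest
	nc -vz -w 5 192.168.50.169 22   # is the guest's SSH port reachable at all?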

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758: exit status 3 (3.167934702s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:17.566869  132508 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host
	E1210 01:03:17.566885  132508 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152225662s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274758 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758: exit status 3 (3.063548626s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:03:26.782934  132646 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host
	E1210 01:03:26.782955  132646 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.76:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-274758" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295: exit status 3 (3.167942671s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:04:33.342866  133008 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	E1210 01:04:33.342888  133008 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-901295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-901295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152730982s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-901295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295: exit status 3 (3.063153786s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1210 01:04:42.558949  133200 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host
	E1210 01:04:42.558972  133200 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.193:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-901295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (726.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m2.987156553s)

                                                
                                                
-- stdout --
	* [old-k8s-version-094470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-094470" primary control-plane node in "old-k8s-version-094470" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 01:04:39.621773  133241 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:39.622013  133241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:39.622023  133241 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:39.622028  133241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:39.622208  133241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:39.622738  133241 out.go:352] Setting JSON to false
	I1210 01:04:39.623639  133241 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10031,"bootTime":1733782649,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:39.623702  133241 start.go:139] virtualization: kvm guest
	I1210 01:04:39.625937  133241 out.go:177] * [old-k8s-version-094470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:39.627386  133241 notify.go:220] Checking for updates...
	I1210 01:04:39.627400  133241 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:39.628543  133241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:39.629624  133241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:39.630669  133241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:39.631704  133241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:39.632720  133241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:39.634145  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:04:39.634503  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:39.634555  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:39.649360  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I1210 01:04:39.649781  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:39.650365  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:04:39.650386  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:39.650792  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:39.650972  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:04:39.652566  133241 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1210 01:04:39.653656  133241 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:39.653939  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:39.653978  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:39.668012  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I1210 01:04:39.668397  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:39.668820  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:04:39.668838  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:39.669105  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:39.669217  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:04:39.701970  133241 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:39.703061  133241 start.go:297] selected driver: kvm2
	I1210 01:04:39.703079  133241 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:39.703227  133241 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:39.703888  133241 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:39.703965  133241 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:39.718020  133241 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:39.718400  133241 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:39.718434  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:04:39.718474  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:39.718512  133241 start.go:340] cluster config:
	{Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:39.718644  133241 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:39.720160  133241 out.go:177] * Starting "old-k8s-version-094470" primary control-plane node in "old-k8s-version-094470" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
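The sequence above restarts the stopped VM and then polls for its IP address, growing the delay between attempts (the "will retry after ..." lines from retry.go). A minimal Go sketch of that retry pattern, not minikube's actual code; hypotheticalLookupIP is a made-up stand-in for querying the libvirt network's DHCP leases for the domain's MAC address:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // hypotheticalLookupIP stands in for reading the DHCP leases of the
    // libvirt network; it always fails here so the retry path is exercised.
    func hypotheticalLookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls until an IP is found or the timeout expires, increasing
    // the delay between attempts, in the spirit of the log lines above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := hypotheticalLookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
            time.Sleep(backoff)
            backoff = backoff * 3 / 2 // grow the delay roughly like the log above
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }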
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
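The shell snippet above keeps /etc/hosts consistent with the machine name: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that logic, operating on an in-memory string instead of the real file, for illustration only:

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname mirrors the logged shell logic: leave the content alone
    // if the hostname is already present, rewrite an existing 127.0.1.1 line,
    // otherwise append one.
    func ensureHostname(hosts, name string) string {
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
            return hosts
        }
        loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loop.MatchString(hosts) {
            return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
    }

    func main() {
        hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureHostname(hosts, "old-k8s-version-094470"))
    }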
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
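configureAuth refreshes the host certificates and generates a server certificate whose subject alternative names cover 127.0.0.1, 192.168.61.11, localhost, minikube and the machine name, as logged above. The sketch below shows how such a certificate could be issued with Go's crypto/x509; unlike minikube, which signs with the CA under .minikube/certs, it self-signs to stay short, and the validity period simply reuses the 26280h CertExpiration value from the profile:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-094470"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the "generating server cert ... san=[...]" line.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-094470"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.11")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }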
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
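The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs as the cgroup manager, with conmon_cgroup set to "pod". A small Go sketch of the same rewrite applied to invented sample file content; the real edits run remotely over SSH via sed, as logged:

    package main

    import (
        "fmt"
        "regexp"
    )

    // configureCrio rewrites the pause image and cgroup manager the way the
    // sed commands above do, and keeps a conmon_cgroup entry next to
    // cgroup_manager. The input is invented sample content, not a real file.
    func configureCrio(conf, pauseImage, cgroupManager string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
            ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager))
        return conf
    }

    func main() {
        sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\nconmon_cgroup = \"system.slice\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(configureCrio(sample, "registry.k8s.io/pause:3.2", "cgroupfs"))
    }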
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
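Since /preloaded.tar.lz4 is not on the VM yet, the preloaded tarball is copied over, unpacked into /var with extended attributes preserved, and then removed, as the steps above show. An illustrative Go wrapper around the unpack and clean-up commands; the copy itself is a plain scp and is omitted, paths match the log, and running this for real needs root plus the lz4 tool:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        steps := [][]string{
            // unpack into /var, keeping security.capability xattrs as logged
            {"sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
                "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
            // remove the tarball afterwards (the log's rm helper; exact flags assumed)
            {"sudo", "rm", "-f", "/preloaded.tar.lz4"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", s, err, out)
            }
        }
    }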
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
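The cycle above shows the harness polling for control-plane containers (none are found) and failing to reach the API server on localhost:8443. The commands below are a minimal sketch, not part of the test log, of how the same checks could be reproduced by hand; they assume shell access to the minikube node, and the profile name is a placeholder rather than a value taken from this run:

    minikube ssh -p <profile>                      # open a shell on the node under test (placeholder profile)
    sudo crictl ps -a --name=kube-apiserver        # empty output matches the "0 containers" lines above
    sudo journalctl -u kubelet -n 400 --no-pager   # kubelet logs usually explain why the static pods never started
    sudo journalctl -u crio -n 400 --no-pager      # CRI-O side of the same startup attempt
    curl -k https://localhost:8443/healthz         # confirms the refused connection seen in "describe nodes"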
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	* 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-094470 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
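The exit status 109 above corresponds to the K8S_KUBELET_NOT_RUNNING error in the log: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248. A minimal troubleshooting sketch, using only the commands already quoted in the output above (run on the node, e.g. via "minikube ssh"); whether the suggested cgroup-driver flag resolves this particular run is not verified here:

	# Inspect the kubelet service and its journal (commands quoted in the kubeadm output above).
	systemctl status kubelet
	journalctl -xeu kubelet
	# List control-plane containers under CRI-O to spot a crashed component.
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the cgroup-driver hint that minikube itself suggests for this error.
	out/minikube-linux-amd64 start -p old-k8s-version-094470 --extra-config=kubelet.cgroup-driver=systemd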
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (235.215813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25: (1.415130195s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
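The WaitForSSH exchange above shells out to the system ssh binary with the logged flags and treats a clean "exit 0" as proof that the guest's sshd is up. A hedged Go sketch of that probe using os/exec follows; sshReady and the 3-second poll interval are illustrative, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "ssh ... exit 0" succeeds against the guest,
// mirroring the external-client probe logged by WaitForSSH above.
func sshReady(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+ip,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest sshd answered and ran the command
		} else if time.Now().After(deadline) {
			return fmt.Errorf("ssh not ready after %s: %w", timeout, err)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	fmt.Println(sshReady("192.168.72.76", "/path/to/id_rsa", 2*time.Minute))
}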
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
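The commands run over SSH above set the machine hostname, write /etc/hostname, and idempotently patch /etc/hosts (only rewriting the 127.0.1.1 entry if one already exists). A small Go helper that renders an equivalent script for a given machine name, purely for illustration; minikube builds these commands separately in its own templates.

package main

import "fmt"

// hostnameScript returns a shell snippet equivalent to the provisioning
// commands above: set the hostname, persist it, and make sure /etc/hosts
// carries a 127.0.1.1 entry for it without adding duplicates.
func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("embed-certs-274758"))
}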
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
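The "generating server cert" line above corresponds to signing a server certificate whose SANs cover the loopback address, the VM IP, the machine name, localhost and minikube, using the cluster CA key pair. A self-contained Go sketch with crypto/x509 follows; a throwaway CA is generated in-process so the example runs standalone, whereas the real flow loads ca.pem/ca-key.pem from the .minikube certs directory.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA, purely so the example is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs listed in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-274758"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.76")},
		DNSNames:     []string{"embed-certs-274758", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}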
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
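The guest clock check above runs "date +%s.%N" in the VM, parses the result, and compares it to the host clock, accepting small deltas (about 87ms here) without resynchronizing. A minimal Go sketch of that comparison; the 1-second tolerance below is an assumption, since the log does not state the actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "date +%s.%N" output captured over SSH
// (e.g. "1733792897.551711245") into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate to 9 digits so ".5" means 500ms, not 5ns.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1733792897.551711245")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 1 * time.Second // assumed threshold, not taken from the log
	fmt.Printf("delta=%v within tolerance: %v\n", delta,
		math.Abs(delta.Seconds()) <= tolerance.Seconds())
}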
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
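The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed one-liners (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before restarting CRI-O. Below is a hedged Go sketch of the underlying key-replacement step; setTOMLKey is an illustrative helper, and the real flow edits the file in place over SSH rather than operating on a string.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setTOMLKey rewrites (or appends) a `key = "value"` line, which is what the
// sed commands above do to 02-crio.conf for pause_image and cgroup_manager.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}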
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
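The three ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: each CA certificate is hashed with "openssl x509 -hash -noout" and then symlinked into /etc/ssl/certs under "<hash>.0", which is the filename OpenSSL's default verify path looks up. A minimal Go sketch of that pattern follows; it is illustrative only (not minikube's own code) and the paths are taken from the commands in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the logged pattern: compute the OpenSSL subject hash
// of a CA certificate and symlink it into certsDir under "<hash>.0".
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, as the logged `ln -fs` does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative invocation; the log links minikubeCA.pem, 86296.pem and 862962.pem this way.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}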
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
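Each "-checkend 86400" run above asks openssl whether the certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit would mark the certificate for regeneration before the cluster restart. A standalone Go sketch of that check over the same certificate paths, again illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// Non-zero exit: the certificate expires within 24h or could not be read.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: expires within 24h or unreadable: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least 24h\n", c)
	}
}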
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
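The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not mention https://control-plane.minikube.internal:8443 is removed so that kubeadm can regenerate it in the next phase. A rough Go equivalent of that cleanup, written over the same file list as a sketch (not the actual minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it, like the logged `sudo rm -f`.
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}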
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
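Rather than a full kubeadm init, the restart path above re-runs individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of that sequence, assuming kubeadm resolves from the versioned binaries directory shown in the log; this is an illustration, not minikube's own runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// The log prefixes PATH with the versioned binaries directory.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}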
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
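The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager to cgroupfs, conmon_cgroup to pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, after which CRI-O is restarted. A Go sketch of one such in-place rewrite (the pause_image line), assuming the drop-in file already exists; the log itself does this with sudo sed -i:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same pattern as the logged sed expression: replace any existing pause_image line.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log then runs `systemctl daemon-reload` and `systemctl restart crio`.
}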
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
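The retry.go:31 lines above are libmachine polling libvirt for the guest's DHCP lease and sleeping for a growing interval between attempts. A minimal stand-alone sketch of that pattern in Go, with the lookup function, delays, and IP invented for illustration (this is not minikube's actual retry helper):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // roughly doubling the delay between attempts like the retry.go lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet") // stub standing in for the libvirt DHCP lookup
            }
            return "192.168.50.169", nil
        }, 2*time.Minute)
        fmt.Println(ip, err)
    }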
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
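The preload decision above hinges on what "sudo crictl images --output json" reports: before the tarball is extracted the expected kube-apiserver image is missing, and after extraction all images are present so loading is skipped. A rough, hypothetical Go sketch of that check follows; the JSON field names (images, repoTags) are assumptions about crictl's output, not values taken from this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // crictlImages mirrors the assumed shape of `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage returns true if crictl reports a stored image whose tag contains want.
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var parsed crictlImages
        if err := json.Unmarshal(out, &parsed); err != nil {
            return false, err
        }
        for _, img := range parsed.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
        fmt.Println(ok, err)
    }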
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
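Each "openssl x509 -noout -in ... -checkend 86400" call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration instead of reuse. The same check expressed in Go, as an illustrative equivalent rather than what minikube actually runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path will be
    // expired checkend from now (the semantics of openssl's -checkend flag).
    func expiresWithin(path string, checkend time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(checkend)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }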
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
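The sequence above is the normal apiserver warm-up: /healthz answers 403 while only anonymous requests reach it, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally 200. A bare-bones polling loop like the one this log implies could look like the sketch below; it is hypothetical, and unlike minikube's real client it skips TLS verification instead of trusting the cluster CA:

    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    // 403 and 500 responses are treated as "not ready yet", matching the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut: a production client would verify against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for apiserver healthz")
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.39.193:8444/healthz", time.Minute))
    }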
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
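	The repeated probe above is how minikube decides whether a kube-apiserver process has come up yet; a minimal sketch of running the same check by hand on the node (flag behaviour only, nothing minikube-specific):

	  # -f matches the pattern against the full command line, -x requires the whole
	  # line to match it, and -n prints only the newest matching PID
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'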
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
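	Since this is a no-preload profile, every image above was shipped as a tarball from the host-side cache and loaded into CRI-O's storage over SSH. A minimal sketch of one such round trip, with image name and path taken from the log:

	  # check whether the runtime already has the image, load the cached tarball if
	  # not, then confirm the CRI side can see it
	  sudo podman image inspect --format '{{.Id}}' gcr.io/k8s-minikube/storage-provisioner:v5 \
	    || sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	  sudo crictl images | grep storage-provisioner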
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
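	The freshly rendered config is written to kubeadm.yaml.new first; whether the restart actually needs reconfiguration is decided by comparing it against the file already on the node (the same diff minikube runs a few lines further down in this log). A minimal sketch:

	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    && echo "rendered config unchanged, no reconfiguration needed"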
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
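	The kubelet flags generated above land in a systemd drop-in (10-kubeadm.conf) rather than in the unit file itself, so the daemon-reload/start pair is what actually picks them up. A minimal sketch for inspecting the result on the node (unit name assumed to be kubelet):

	  # show the unit together with its drop-in, then confirm it is running
	  sudo systemctl cat kubelet
	  sudo systemctl status kubelet --no-pager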
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
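	The openssl x509 -hash / ln -fs pairs above populate OpenSSL's hashed certificate directory, which is how the guest OS is taught to trust the minikube CA and the extra test certs. A minimal sketch of the same idea for one certificate, using paths from the log (the subject-name hash seen in this run was b5213941):

	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"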
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
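	Each -checkend 86400 probe above asks whether the certificate will still be valid 24 hours from now; a non-zero exit is what would trigger regeneration. A minimal sketch of one probe:

	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least another 24h" \
	    || echo "expires within 24h, would be regenerated"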
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
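	Because none of the old kubeconfigs reference control-plane.minikube.internal (on this rebuilt VM they do not exist at all, hence the "No such file or directory" errors above), each one is removed and the new kubeadm.yaml is promoted into place before the init phases run. A minimal sketch of the per-file check, mirroring the grep/rm pairs above:

	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	      || sudo rm -f /etc/kubernetes/$f.conf
	  done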
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
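	The 403 -> 500 -> 200 progression above is typical for an apiserver restart: anonymous requests to /healthz are rejected until the RBAC bootstrap roles exist, and the verbose probe then reports the rbac and scheduling bootstrap hooks as failed until they complete. A minimal sketch of checking it by hand (IP and port from this profile; -k because the serving cert is signed by the cluster-internal CA):

	  curl -k 'https://192.168.50.169:8443/healthz?verbose'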
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
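	With the kvm2 driver and the crio runtime, minikube falls back to the built-in bridge CNI and writes a single conflist for it. A minimal sketch for confirming what ended up on the node (path from the log; the file contents themselves are not shown in this report):

	  sudo ls -la /etc/cni/net.d/
	  sudo cat /etc/cni/net.d/1-k8s.conflist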
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
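	Every pod wait above is skipped because the node itself has not reported Ready yet after the kubelet restart; the same picture can be pulled with kubectl (context name assumed to match the profile):

	  kubectl --context no-preload-584179 get nodes
	  kubectl --context no-preload-584179 -n kube-system get pods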
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
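Each block like the one above is one pass of minikube's apiserver wait loop on the v1.20.0 cluster: pgrep finds no running kube-apiserver, every crictl probe for a control-plane component returns no container, "describe nodes" fails because nothing answers on localhost:8443, and only kubelet, dmesg, CRI-O and container-status output can be gathered before the next retry. The same probes can be repeated by hand from a shell on the node (via minikube ssh), for example:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # non-zero exit while no apiserver process exists
	sudo crictl ps -a --quiet --name=kube-apiserver  # empty output: the container was never created
	sudo journalctl -u kubelet -n 400                # usually shows why the static pods did not start
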
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
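At this point the embed-certs run has spent its full 4m0s budget waiting for the metrics-server pod to report Ready, so minikube abandons the control-plane restart and falls back to a full kubeadm reset followed by a fresh init. The condition it was polling can be checked directly with kubectl, assuming the profile's kubeconfig context carries the profile name and the addon's usual k8s-app=metrics-server label:

	kubectl --context embed-certs-274758 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context embed-certs-274758 -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=4m0s
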
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
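The grep/rm sequence above is minikube's stale-config cleanup: each expected kubeconfig under /etc/kubernetes is checked for the canonical control-plane endpoint and removed if it does not contain it. After the kubeadm reset none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. Condensed, the logged behaviour amounts to:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
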
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
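From here the v1.20.0 cluster is in the wait-control-plane phase: kubeadm has written the static pod manifests and waits up to 4m0s for the kubelet to bring them up. If this stalls, the usual checks on the node are:

	sudo ls /etc/kubernetes/manifests/            # kube-apiserver, kube-controller-manager, kube-scheduler, etcd manifests
	sudo crictl ps -a --name=kube-apiserver       # has the kubelet created the container yet?
	sudo journalctl -u kubelet -n 100             # kubelet errors while starting the static pods
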
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
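The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; it can be inspected on the node with sudo cat /etc/cni/net.d/1-k8s.conflist. The exact contents are not reproduced in the log, but a bridge conflist of this kind generally has the shape below (values illustrative, not the literal file):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
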
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
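Right after init, minikube binds the kube-system default service account to cluster-admin (the minikube-rbac clusterrolebinding), stamps the node with minikube.k8s.io/* labels, and records the apiserver's oom_adj (-16). The first two can be verified by hand against the new cluster:

	kubectl --context embed-certs-274758 get clusterrolebinding minikube-rbac -o wide
	kubectl --context embed-certs-274758 get node embed-certs-274758 --show-labels
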
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
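The burst of "kubectl get sa default" calls above is minikube polling, roughly every 500ms, until the default service account exists in the default namespace, which it treats as the signal that the new control plane's service-account machinery is working; here that took about 4.3s. A rough standalone equivalent:

	until kubectl --context embed-certs-274758 -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done
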
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
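The addon step above copies the manifests to /etc/kubernetes/addons/ and applies them in one `kubectl apply` invocation using the in-VM kubeconfig. A minimal sketch of that apply step (illustrative; file and binary paths are taken from the log, error handling is reduced):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs a single `kubectl apply` over the manifest files
// already copied to the node, matching the command shown in the log.
func applyAddonManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	_ = applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.2/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
}
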
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
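The pod_ready lines above wait on each system pod's Ready condition. A small sketch of the same check done from outside the test harness via kubectl's JSONPath output (illustrative; context, namespace, and pod name are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition, the same condition the
// pod_ready log lines above report.
func waitPodReady(kubectx, namespace, pod string, timeout time.Duration) error {
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "-n", namespace,
			"get", "pod", pod, "-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	fmt.Println(waitPodReady("embed-certs-274758", "kube-system", "coredns-7c65d6cfc9-bgjgh", 6*time.Minute))
}
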
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
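The healthz probe above is a plain GET against https://192.168.72.76:8443/healthz that expects a 200 and the body "ok". A self-contained sketch of that probe (illustrative; the real client trusts the cluster CA, while this sketch skips certificate verification to stay short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAPIServerHealthz issues the same kind of GET the log shows and treats
// a 200 response with body "ok" as healthy.
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkAPIServerHealthz("https://192.168.72.76:8443/healthz"))
}
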
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
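The grep/rm sequence above removes any existing kubeconfig file under /etc/kubernetes that does not reference the expected control-plane endpoint, so kubeadm regenerates it. A minimal sketch of that check (illustrative; file names and the endpoint are the ones the log inspects):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfig removes the file if it does not reference the expected
// control-plane endpoint, mirroring the grep/rm pair in the log.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(f, endpoint); err != nil {
			fmt.Println(err)
		}
	}
}
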
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
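The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA certificate so the value printed by kubeadm can be cross-checked (illustrative; the certs directory is the one kubeadm reports using earlier in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the kubeadm discovery-token-ca-cert-hash: the SHA-256
// of the DER-encoded Subject Public Key Info of the cluster CA certificate.
func caCertHash(caCertPath string) (string, error) {
	pemBytes, err := os.ReadFile(caCertPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block found in %s", caCertPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(hash, err)
}
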
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
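The bridge-CNI step announced here writes a conflist to /etc/cni/net.d/1-k8s.conflist (496 bytes, as logged a little further down). The exact contents are not shown in the log, so the sketch below writes a representative bridge + host-local conflist of that general shape; the concrete field values (name, subnet, plugin list) are assumptions, not minikube's literal file:

package main

import "os"

// A representative bridge CNI conflist of the kind the step above writes.
// Field values are assumptions; the file minikube generates may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	_ = os.MkdirAll("/etc/cni/net.d", 0o755)
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
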
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
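The log-gathering pass above first lists container IDs per component with `crictl ps -a --quiet --name=<component>` and then tails each container's log with `crictl logs --tail 400 <id>`. A compact sketch of that two-step pattern (illustrative; it shells out to the same crictl commands shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs mirrors the two crictl invocations in the log: list
// container IDs for a component by name, then tail each container's log.
func gatherComponentLogs(name string, tail int) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return fmt.Errorf("no container found matching %q", name)
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("==> %s (%s)\n%s\n", name, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-proxy", "kube-controller-manager"} {
		if err := gatherComponentLogs(c, 400); err != nil {
			fmt.Println(err)
		}
	}
}
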
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
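	For reference, a minimal sketch (not part of this run) of the equivalent manual steps: the addon enablement above is driven programmatically over SSH, but the same addons can be toggled from the minikube CLI, assuming the profile name shown in the log:
	
		# enable the same addons by hand against the same profile
		minikube -p default-k8s-diff-port-901295 addons enable storage-provisioner
		minikube -p default-k8s-diff-port-901295 addons enable metrics-server
		# default-storageclass is on by default; list everything to confirm
		minikube -p default-k8s-diff-port-901295 addons list
	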
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
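	For reference, a minimal sketch (not from this run) of how the still-Pending metrics-server pod noted above could be inspected, assuming the context name from the log and the standard k8s-app=metrics-server label:
	
		kubectl --context default-k8s-diff-port-901295 -n kube-system get pods -l k8s-app=metrics-server
		kubectl --context default-k8s-diff-port-901295 -n kube-system describe pod -l k8s-app=metrics-server
		# 'kubectl top' will keep failing until the metrics-server pod reports Ready
		kubectl --context default-k8s-diff-port-901295 top nodes
	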
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.242382017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793404242360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0524d34d-8d4f-4731-a9e5-13682f7746b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.242868001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c018b40-bba4-4df1-b22f-453d0fc8e0ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.242927617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c018b40-bba4-4df1-b22f-453d0fc8e0ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.242960358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c018b40-bba4-4df1-b22f-453d0fc8e0ac name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.272729148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c228f03-063f-4b0b-b162-fc6b8efbe06c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.272786441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c228f03-063f-4b0b-b162-fc6b8efbe06c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.273517810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68a865fb-83b7-455e-9773-5976053db7cc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.273853225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793404273837272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68a865fb-83b7-455e-9773-5976053db7cc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.274581626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71e89129-b100-4b8c-947c-9608c39c2243 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.274646337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71e89129-b100-4b8c-947c-9608c39c2243 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.274706326Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71e89129-b100-4b8c-947c-9608c39c2243 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.304258317Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74540432-c02e-4524-940a-5c2dc114c990 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.304318059Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74540432-c02e-4524-940a-5c2dc114c990 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.305358629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1516c8fe-3d6b-48cb-bfae-a8db1a607b4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.305795298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793404305774661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1516c8fe-3d6b-48cb-bfae-a8db1a607b4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.306239361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d53b4127-6a0a-4d29-b086-90bb9e4ef74a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.306330719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d53b4127-6a0a-4d29-b086-90bb9e4ef74a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.306374411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d53b4127-6a0a-4d29-b086-90bb9e4ef74a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.334768575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=901e044f-2b9a-4135-9d85-4edd2f5a6875 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.334857048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=901e044f-2b9a-4135-9d85-4edd2f5a6875 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.336258919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00a0917e-2f01-487f-909e-64bca8d7e3c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.336685138Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793404336658417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00a0917e-2f01-487f-909e-64bca8d7e3c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.338251809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30279249-e5fe-43a8-9b7d-a222f2a3c3fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.338324080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30279249-e5fe-43a8-9b7d-a222f2a3c3fd name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:16:44 old-k8s-version-094470 crio[632]: time="2024-12-10 01:16:44.338355658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30279249-e5fe-43a8-9b7d-a222f2a3c3fd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 01:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058441] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.955123] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.577947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.210341] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.056035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052496] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.200301] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.121921] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.235690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +5.849695] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.064134] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.756376] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +13.680417] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 01:12] systemd-fstab-generator[5121]: Ignoring "noauto" option for root device
	[Dec10 01:14] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.065463] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:16:44 up 8 min,  0 users,  load average: 0.09, 0.11, 0.08
	Linux old-k8s-version-094470 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: internal/singleflight.(*Group).doCall(0x70c5750, 0xc000101d10, 0xc0006f88d0, 0x23, 0xc00064ae00)
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/internal/singleflight/singleflight.go:95 +0x2e
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: created by internal/singleflight.(*Group).DoChan
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/internal/singleflight/singleflight.go:88 +0x2cc
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: goroutine 164 [runnable]:
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: net._C2func_getaddrinfo(0xc0008d14c0, 0x0, 0xc000d03290, 0xc00077e390, 0x0, 0x0, 0x0)
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         _cgo_gotypes.go:94 +0x55
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: net.cgoLookupIPCNAME.func1(0xc0008d14c0, 0x20, 0x20, 0xc000d03290, 0xc00077e390, 0x0, 0xc0008bf6a0, 0x57a492)
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/net/cgo_unix.go:161 +0xc5
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc0006f88a0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: net.cgoIPLookup(0xc0002f3aa0, 0x48ab5d6, 0x3, 0xc0006f88a0, 0x1f)
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]: created by net.cgoLookupIP
	Dec 10 01:16:41 old-k8s-version-094470 kubelet[5589]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Dec 10 01:16:41 old-k8s-version-094470 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 01:16:41 old-k8s-version-094470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 01:16:42 old-k8s-version-094470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 10 01:16:42 old-k8s-version-094470 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 01:16:42 old-k8s-version-094470 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 01:16:42 old-k8s-version-094470 kubelet[5640]: I1210 01:16:42.329520    5640 server.go:416] Version: v1.20.0
	Dec 10 01:16:42 old-k8s-version-094470 kubelet[5640]: I1210 01:16:42.329850    5640 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 01:16:42 old-k8s-version-094470 kubelet[5640]: I1210 01:16:42.331848    5640 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 01:16:42 old-k8s-version-094470 kubelet[5640]: W1210 01:16:42.332791    5640 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 10 01:16:42 old-k8s-version-094470 kubelet[5640]: I1210 01:16:42.332915    5640 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (243.709705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-094470" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (726.26s)
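The kubelet on old-k8s-version-094470 never became healthy during this run: the healthz probe on 127.0.0.1:10248 was refused throughout the 4m0s wait, systemd's restart counter for kubelet.service reached 20, and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch using only the commands the output itself suggests (profile name taken from the log above; the cgroup-driver override is minikube's own suggestion, not a verified fix):

	minikube -p old-k8s-version-094470 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-094470 ssh -- sudo journalctl -xeu kubelet
	minikube -p old-k8s-version-094470 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	minikube start -p old-k8s-version-094470 --kubernetes-version=v1.20.0 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd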

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274758 -n embed-certs-274758
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:22:28.880497189 +0000 UTC m=+5941.960135339
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
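The wait loop above polls for pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. For a manual spot check, the same selector can be queried directly; a sketch assuming the kubeconfig context name matches the profile name, as it does elsewhere in this report:

	kubectl --context embed-certs-274758 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide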
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-274758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-274758 logs -n 25: (1.920425559s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
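
The two fix.go lines above show how minikube sanity-checks the VM clock after the restart: the guest time is read over SSH with `date +%s.%N` (the command shown just above) and compared against the host clock, and the delta is logged. The sketch below reproduces that comparison in Go; it is illustrative only, and the 2-second tolerance is an assumption, since the threshold minikube actually applies is not visible in this log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDateOutput converts the output of `date +%s.%N` (e.g. "1733792897.551711245",
// the value captured in the log above) into a time.Time.
func parseDateOutput(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(strings.TrimSpace(s), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseDateOutput("1733792897.551711245")
	if err != nil {
		panic(err)
	}
	host := time.Now() // in the real check this is the host-side wall clock at probe time
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed value; not taken from the log
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
}
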
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
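
Pieced together, the sed edits at 01:08:18 above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the values below before crio is restarted: the pause image, the cgroupfs cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. This is reconstructed from the commands in the log, not captured from the VM, so the surrounding file layout may differ.

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
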
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
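
The `-checkend 86400` invocations above ask openssl whether each existing control-plane certificate will still be valid 24 hours from now; the restart path only reuses the on-disk certs if these checks pass. A minimal Go equivalent of one such check, offered as a sketch rather than minikube's actual code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the Go analogue of `openssl x509 -noout -in <path> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; adjust for whichever cert is being checked.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
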
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
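The libmachine loop above re-reads the DHCP leases for mk-default-k8s-diff-port-901295 and, while no IP is present, schedules another attempt after a growing, slightly randomized delay. A generic retry-with-backoff helper in Go illustrating that pattern (a sketch; the function names and limits are assumptions, not minikube's retry.go API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
// sleeping a little longer (plus jitter) after each failure.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(delay)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	polls := 0
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		polls++
		if polls < 4 { // pretend the DHCP lease shows up on the 4th poll
			return errNoIP
		}
		return nil
	})
	fmt.Println("result:", err, "after", polls, "polls")
}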
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
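The preload step above decides between reusing a tarball already on the VM and copying the cached one over: it runs stat on /preloaded.tar.lz4, treats a non-zero exit as "not there", scps the cached tarball in, extracts it with tar, and removes it. A hedged Go sketch of that existence check run as a local process (minikube issues the same command through its ssh_runner; the helper name here is made up):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// remoteFileExists runs `stat` on the target and maps a failing exit
// status to "file absent" rather than a hard error.
func remoteFileExists(path string) (bool, error) {
	cmd := exec.Command("stat", "-c", "%s %y", path)
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // stat exited non-zero: treat as missing
		}
		return false, err // stat itself could not be started
	}
	return true, nil
}

func main() {
	ok, err := remoteFileExists("/preloaded.tar.lz4")
	fmt.Println("exists:", ok, "err:", err)
}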
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
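Each "needs transfer" line above reflects a comparison between the image ID reported by `podman image inspect` and the hash expected for that release; on a mismatch or missing image, the image is removed from the runtime and reloaded from the local cache directory, and here the cache files themselves are absent, hence the final warning. A small Go sketch of that comparison step (the inspect invocation matches what the log shows; the surrounding helper is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether image must be (re)loaded into the runtime,
// i.e. podman does not know it under the expected ID.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Expected hash taken from the pause:3.2 line in the log above.
	transfer := needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("needs transfer:", transfer)
}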
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
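Before starting the kubelet, the runner rewrites /etc/hosts idempotently: the `{ grep -v $'\thost$' ...; echo "ip\thost"; } > /tmp/h.$$; sudo cp` idiom drops any stale line for the host name and appends the fresh mapping. The same idea in Go, as a sketch against a scratch file rather than the node's real /etc/hosts (function name and file path are illustrative assumptions):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes blank lines and any existing line ending in
// "\t"+host, then appends a fresh "ip\thost" mapping, mirroring the
// grep -v / echo / cp idiom in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("hosts.txt", "192.168.61.11", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}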
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
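The `-checkend 86400` invocations above ask whether each certificate expires within the next 24 hours; a failing check would force the cert to be regenerated before the cluster restart. The equivalent check in Go, reading a PEM file directly (the path below is a placeholder, not one of the certs listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// before now+window, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}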
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
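After the kubeadm init phases, the runner polls roughly twice a second for a kube-apiserver process (`sudo pgrep -xnf kube-apiserver.*minikube.*`) until one appears or the wait budget runs out. A compact Go version of that poll (the command and the ~500ms cadence are taken from the log; the overall timeout is an assumption):

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching process exists
// or the context deadline is hit.
func waitForAPIServerProcess(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep found at least one matching process
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForAPIServerProcess(ctx); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is running")
}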
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
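
The polling above follows the usual restart sequence: the first probe is refused while the apiserver is still coming up, /healthz then typically returns 403 for the anonymous probe until the RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. The following is a short Go sketch of such a health poll; it is a simplification rather than minikube's api_server.go, and it skips server-certificate verification only to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the timeout expires,
// mirroring the retry loop shown in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped only to keep this sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 (anonymous not yet authorized) and 500 (post-start hooks
			// still running) are treated as "not ready yet" and retried.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.50.169:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
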
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
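
Each pod wait above is skipped because the node itself still reports Ready False after the kubelet restart; a pod only counts as Ready once its PodReady condition is True. Below is an illustrative client-go sketch of that condition check, not minikube's pod_ready.go; the kubeconfig path and pod name are placeholders taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name, mirroring values in the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-584179", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
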
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
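
The "Run:" lines throughout this log are commands executed on the guest VM over the SSH client created just above. A stripped-down Go sketch of running one such command with golang.org/x/crypto/ssh follows; the address, user, and key path mirror the values in the log but are placeholders here, and this is not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes cmd on addr over SSH using the given private key and
// returns the combined stdout/stderr output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.169:22", "docker",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa",
		"sudo journalctl -u kubelet -n 400")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println(out)
}
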
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
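	Every pass through this loop ends the same way: crictl finds no kube-apiserver container, and the bundled kubectl cannot reach localhost:8443, so the describe-nodes step fails until the retry budget runs out. A minimal manual reproduction of the same two checks, using only the commands already shown in the log (run inside the affected minikube guest; the paths are the ones logged above):

	    # 1) Is there any apiserver container at all? Empty output means none.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # 2) Can the bundled kubectl reach the control plane? While 8443 is refused,
	    #    this exits non-zero with the same "connection ... refused" message.
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig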
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
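The grep/rm sequence above is minikube's stale kubeconfig cleanup: each of the four kubeconfigs is checked for the expected control-plane endpoint and removed if it is missing or points elsewhere, so the subsequent kubeadm init regenerates it. A hedged Go sketch of that logic, with the file paths and endpoint string taken from the log and the helper itself being illustrative rather than minikube's actual code:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing file or wrong endpoint: drop it so `kubeadm init` rewrites it.
			fmt.Printf("removing stale %s (read err: %v)\n", f, err)
			_ = os.Remove(f)
		}
	}
}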
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
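The kubelet-check and api-check phases above are simple health probes: kubeadm polls an HTTP healthz endpoint until it returns 200 or a deadline passes. A hedged sketch of such a poll in Go, using the kubelet URL quoted in the log; the loop is illustrative, not kubeadm's implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// Same endpoint the [kubelet-check] phase waits on; adjust for the apiserver check.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}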
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
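Here minikube writes its bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact 496-byte file is not reproduced in the log; the Go sketch below emits a typical minimal bridge + host-local conflist purely for illustration (the subnet is an assumed pod CIDR, not necessarily what minikube uses):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR, adjust to the cluster
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}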
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
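At this point api_server.go has confirmed that /healthz on 192.168.72.76:8443 returns 200 and read back the control-plane version (v1.31.2). A hedged client-go sketch of the equivalent version query; the kubeconfig path is the one used throughout the log and is an assumption here, not a fixed API:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path from the log; any kubeconfig pointing at the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.2
}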
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
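The 1-k8s.conflist copied above is not shown in the log (only its size, 496 bytes); the sketch below is a generic bridge/host-local/portmap config of the shape the standard CNI plugins expect, given purely for illustration and not as minikube's actual file (the subnet is an assumption):

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF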
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
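The elevateKubeSystemPrivileges step timed above is the clusterrolebinding plus the repeated poll for the default service account seen in the preceding lines; condensed into a single shell sketch using the same binary and kubeconfig paths the log runs:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
    # retry until the "default" service account exists in the new cluster
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get sa default >/dev/null 2>&1; do sleep 0.5; done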
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
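The per-pod readiness polling that starts here can be approximated outside the test harness with kubectl wait against the same control-plane labels; a sketch, assuming the kubectl context carries the profile name:

    kubectl --context default-k8s-diff-port-901295 -n kube-system wait pod \
      -l component=etcd --for=condition=Ready --timeout=6m0s
    kubectl --context default-k8s-diff-port-901295 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s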
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
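After the metrics-server manifests applied above, its state can be checked with stock kubectl commands; the APIService name below is the standard metrics-server registration, not something read from this log:

    kubectl -n kube-system get deploy metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes   # only returns data once the APIService reports Available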
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
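	The suggestion above points at a kubelet cgroup-driver mismatch. The sketch below is a hedged illustration of how that override might be passed on a retry; the driver, container runtime, and Kubernetes version are assumptions inferred from this test matrix (KVM, CRI-O, v1.20.0) rather than flags copied from the failing invocation, and the profile name is deliberately omitted.
		# Illustrative retry only; every flag besides --extra-config is assumed from the test profile
		minikube start \
		  --driver=kvm2 \
		  --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd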
	
	
	==> CRI-O <==
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.243421076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750243403714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77276910-8bca-4beb-b7d2-53042a8027d2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.243883544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f439ff0e-14aa-4de0-ad29-ce0a5b3bbdd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.244131588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f439ff0e-14aa-4de0-ad29-ce0a5b3bbdd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.244359413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f439ff0e-14aa-4de0-ad29-ce0a5b3bbdd9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.285550976Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=357d6d5d-eaae-4010-b4f7-726a56d26f10 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.285635848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=357d6d5d-eaae-4010-b4f7-726a56d26f10 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.286833661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2d1a118-2428-47ae-bfbe-5630a840469e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.287489015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750287468761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2d1a118-2428-47ae-bfbe-5630a840469e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.287858885Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cc0e592-275e-4307-b607-6c9f93241848 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.287989124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cc0e592-275e-4307-b607-6c9f93241848 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.288171392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cc0e592-275e-4307-b607-6c9f93241848 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.323087425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aef876bf-efc5-4299-9533-0593881bd220 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.323155830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aef876bf-efc5-4299-9533-0593881bd220 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.324260176Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90152f2e-7f50-4452-9656-e29d80b5e4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.324648739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750324629262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90152f2e-7f50-4452-9656-e29d80b5e4e6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.325130793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfc4e601-6d83-40b0-9d12-6a8a25cd0eaf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.325196751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfc4e601-6d83-40b0-9d12-6a8a25cd0eaf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.325493864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfc4e601-6d83-40b0-9d12-6a8a25cd0eaf name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.356025603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c370140-99f5-48e6-9546-1a9515fffa64 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.356084766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c370140-99f5-48e6-9546-1a9515fffa64 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.357112224Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df6f6bb4-2c87-4396-a1fc-11df90280605 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.357774019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750357753206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df6f6bb4-2c87-4396-a1fc-11df90280605 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.358316006Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5781af02-19ea-4693-83d3-fb71825a73f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.358378890Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5781af02-19ea-4693-83d3-fb71825a73f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:22:30 embed-certs-274758 crio[718]: time="2024-12-10 01:22:30.358603513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5781af02-19ea-4693-83d3-fb71825a73f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	539ca3cb672dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   41e8d06cd5056       storage-provisioner
	55a7c60e436fa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   df58a6a4b4b98       coredns-7c65d6cfc9-bgjgh
	2b3ff20847120       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e7a4c1081cefa       coredns-7c65d6cfc9-m4qgb
	75d5ee8060a1e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   41e5ee0c3296f       kube-proxy-v28mz
	d9ca46cabc94b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   f8b897e6d6a53       kube-apiserver-embed-certs-274758
	bebe7b8c93db1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   1c0a6de6f1d87       kube-controller-manager-embed-certs-274758
	8658835ca140b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   a4e108c8f05dc       kube-scheduler-embed-certs-274758
	29eefdbc8574b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   767ea99b56300       etcd-embed-certs-274758
	c9e90d02b1492       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   d49f09c506027       kube-apiserver-embed-certs-274758
	
	
	==> coredns [2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-274758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-274758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=embed-certs-274758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 01:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-274758
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:22:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:18:30 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:18:30 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:18:30 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:18:30 +0000   Tue, 10 Dec 2024 01:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.76
	  Hostname:    embed-certs-274758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 56378d021cd14668a888b76f8753656d
	  System UUID:                56378d02-1cd1-4668-a888-b76f8753656d
	  Boot ID:                    c417dfc5-e023-447a-a35b-9f030b1e0e21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bgjgh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-m4qgb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-embed-certs-274758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-embed-certs-274758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-274758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-v28mz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-embed-certs-274758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-mcw2c               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node embed-certs-274758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node embed-certs-274758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node embed-certs-274758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node embed-certs-274758 event: Registered Node embed-certs-274758 in Controller
	
	
	==> dmesg <==
	[  +0.052068] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037237] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.773561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944413] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537439] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.387611] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.066423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074139] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.124542] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.286624] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +3.900794] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +2.037616] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +0.058273] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.498688] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.936754] kauditd_printk_skb: 85 callbacks suppressed
	[Dec10 01:13] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.063011] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.993630] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +0.103391] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.259436] systemd-fstab-generator[3060]: Ignoring "noauto" option for root device
	[  +0.110117] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.824923] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932] <==
	{"level":"info","ts":"2024-12-10T01:13:08.438237Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T01:13:08.438309Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.76:2380"}
	{"level":"info","ts":"2024-12-10T01:13:08.438336Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.76:2380"}
	{"level":"info","ts":"2024-12-10T01:13:08.445363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 switched to configuration voters=(7369009462639702934)"}
	{"level":"info","ts":"2024-12-10T01:13:08.445556Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"28d07f4d863f7a6f","local-member-id":"6643fb104721b396","added-peer-id":"6643fb104721b396","added-peer-peer-urls":["https://192.168.72.76:2380"]}
	{"level":"info","ts":"2024-12-10T01:13:08.577969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:08.578019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:08.578041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 received MsgPreVoteResp from 6643fb104721b396 at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:08.578053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 received MsgVoteResp from 6643fb104721b396 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6643fb104721b396 elected leader 6643fb104721b396 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.582138Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6643fb104721b396","local-member-attributes":"{Name:embed-certs-274758 ClientURLs:[https://192.168.72.76:2379]}","request-path":"/0/members/6643fb104721b396/attributes","cluster-id":"28d07f4d863f7a6f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T01:13:08.582335Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.582452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:08.582826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:08.584950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:08.585067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:08.586615Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:08.587124Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:08.591816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:13:08.598038Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"28d07f4d863f7a6f","local-member-id":"6643fb104721b396","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598157Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.76:2379"}
	
	
	==> kernel <==
	 01:22:30 up 14 min,  0 users,  load average: 0.05, 0.17, 0.17
	Linux embed-certs-274758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24] <==
	W1210 01:13:04.138863       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.198497       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.254855       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.273349       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.298206       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.314961       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.501510       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.511178       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.540833       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.558473       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.593591       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.731336       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.779541       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.783244       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.784581       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.796123       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.828869       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.846580       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.912557       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.986773       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.023462       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.048234       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.054689       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.151796       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.226365       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db] <==
	E1210 01:18:11.584406       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:18:11.584484       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:18:11.585684       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:18:11.585730       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:19:11.586953       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:19:11.587067       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 01:19:11.586961       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:19:11.587119       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 01:19:11.588360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:19:11.588393       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:21:11.588709       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 01:21:11.589122       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:21:11.589395       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 01:21:11.589390       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 01:21:11.590687       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:21:11.590714       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284] <==
	E1210 01:17:17.574809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:17:18.008203       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:17:47.582537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:17:48.016799       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:18:17.588821       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:18.023842       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:18:30.392499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-274758"
	E1210 01:18:47.595133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:48.031484       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:19:13.256554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="287.4µs"
	E1210 01:19:17.602403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:18.039614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:19:26.245497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.343µs"
	E1210 01:19:47.609381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:48.047271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:20:17.616104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:18.054977       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:20:47.622019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:48.063707       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:21:17.629420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:18.071577       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:21:47.636047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:48.080368       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:22:17.642056       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:22:18.087871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:13:19.173429       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:13:19.189652       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.76"]
	E1210 01:13:19.189736       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:13:19.264418       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:13:19.264472       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:13:19.264505       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:13:19.266809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:13:19.267102       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:13:19.267137       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:13:19.268632       1 config.go:199] "Starting service config controller"
	I1210 01:13:19.268669       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:13:19.268703       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:13:19.268723       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:13:19.269272       1 config.go:328] "Starting node config controller"
	I1210 01:13:19.269301       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:13:19.368801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:13:19.368861       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:13:19.369666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7] <==
	W1210 01:13:10.597122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 01:13:10.597634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 01:13:10.597720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:10.597842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 01:13:10.597961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.597194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:10.598028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.597242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:10.598097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.467827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 01:13:11.468001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.547837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 01:13:11.548043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.598149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 01:13:11.598215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.764402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:13:11.764507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.768152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:11.768258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.824273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:11.824361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1210 01:13:12.189487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:21:21 embed-certs-274758 kubelet[2957]: E1210 01:21:21.225620    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:21:23 embed-certs-274758 kubelet[2957]: E1210 01:21:23.369045    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793683368717939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:23 embed-certs-274758 kubelet[2957]: E1210 01:21:23.369321    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793683368717939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:33 embed-certs-274758 kubelet[2957]: E1210 01:21:33.370549    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793693370278636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:33 embed-certs-274758 kubelet[2957]: E1210 01:21:33.370995    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793693370278636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:34 embed-certs-274758 kubelet[2957]: E1210 01:21:34.225005    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:21:43 embed-certs-274758 kubelet[2957]: E1210 01:21:43.372044    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793703371731610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:43 embed-certs-274758 kubelet[2957]: E1210 01:21:43.372080    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793703371731610,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:47 embed-certs-274758 kubelet[2957]: E1210 01:21:47.226570    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:21:53 embed-certs-274758 kubelet[2957]: E1210 01:21:53.373855    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793713373544992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:21:53 embed-certs-274758 kubelet[2957]: E1210 01:21:53.374228    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793713373544992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:01 embed-certs-274758 kubelet[2957]: E1210 01:22:01.225009    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:22:03 embed-certs-274758 kubelet[2957]: E1210 01:22:03.376294    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793723376006632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:03 embed-certs-274758 kubelet[2957]: E1210 01:22:03.376677    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793723376006632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]: E1210 01:22:13.240664    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]: E1210 01:22:13.378565    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793733378194213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:13 embed-certs-274758 kubelet[2957]: E1210 01:22:13.378684    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793733378194213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:15 embed-certs-274758 kubelet[2957]: E1210 01:22:15.225366    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:22:23 embed-certs-274758 kubelet[2957]: E1210 01:22:23.380849    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793743380492616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:23 embed-certs-274758 kubelet[2957]: E1210 01:22:23.380957    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793743380492616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:27 embed-certs-274758 kubelet[2957]: E1210 01:22:27.226989    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	
	
	==> storage-provisioner [539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f] <==
	I1210 01:13:20.120449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:13:20.131144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:13:20.131205       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:13:20.144731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:13:20.145084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142!
	I1210 01:13:20.146637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46948010-873c-4fb9-bdc6-c2b19cb378d9", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142 became leader
	I1210 01:13:20.245322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-274758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mcw2c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c: exit status 1 (62.765905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mcw2c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-584179 -n no-preload-584179
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:23:04.072002183 +0000 UTC m=+5977.151640333
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-584179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-584179 logs -n 25: (1.930172409s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
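The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full kubeadm init. A minimal sketch of driving those phases from Go, assuming the kubeadm binary lives under the versioned minikube path shown in the logged commands; this is illustrative, not minikube's ssh_runner code.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPhase shells out to kubeadm with the PATH override used in the logged commands.
    func runPhase(phase string) error {
        args := []string{"env", "PATH=/var/lib/minikube/binaries/v1.31.2:/usr/bin:/bin",
            "kubeadm", "init", "phase"}
        args = append(args, strings.Fields(phase)...)
        args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
        }
        return nil
    }

    func main() {
        // Same order as the log: certificates, kubeconfigs, kubelet, static pods, etcd.
        for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
            if err := runPhase(phase); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
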
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
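The 403 and 500 responses above are expected while the restarted apiserver finishes its poststart hooks; the wait loop simply keeps polling /healthz until it gets a 200. A minimal stand-alone sketch of that loop (not minikube's implementation), using the endpoint from the log and skipping TLS verification because this ad-hoc client does not trust the cluster CA.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // This throwaway client does not carry the cluster CA, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.76:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous user) and 500 (poststart hooks still running) mean "keep waiting".
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
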
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
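The pod_ready loop above keeps re-reading each control-plane pod until its Ready condition is True, and (as seen here) skips pods whose node is not yet Ready. A compact client-go sketch of the same idea, assuming a kubeconfig path; this is illustrative, not minikube's pod_ready.go.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the context expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("%s/%s never became Ready: %w", ns, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        fmt.Println(waitPodReady(ctx, cs, "kube-system", "kube-scheduler-embed-certs-274758"))
    }
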
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
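The provision.go line above regenerates the machine's server certificate with the listed SANs, signed by the minikube CA. A rough, hedged sketch of doing the same with Go's crypto/x509; the file names and the PKCS#1 key format are assumptions, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA pair; errors are ignored because this is only an illustration.
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caKeyBlock, _ := pem.Decode(caKeyPEM)
        ca, _ := x509.ParseCertificate(caBlock.Bytes)
        caKey, _ := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes) // assumed PKCS#1 RSA CA key

        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-094470"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-094470"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.11")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
    }
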
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
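The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and only resync if the drift is too large. A tiny sketch of that comparison; the one-second threshold is an assumption, since the log only says the 81.8ms delta was within minikube's tolerance.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1733792916.738645658" // what `date +%s.%N` returned over SSH in the log
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*1e9)) // float64 precision loss (sub-microsecond) is fine here
        delta := time.Since(guest)             // in the real check, the host timestamp is taken at the same moment
        if math.Abs(delta.Seconds()) < 1.0 {   // assumed tolerance
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock drifted by %v; the host time would be pushed to the VM\n", delta)
        }
    }
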
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
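For reference, a minimal shell sketch (illustrative only, assuming minikube's default drop-in at /etc/crio/crio.conf.d/02-crio.conf) of how to confirm the values the three sed edits above are expected to leave behind:
  # sketch: show the keys rewritten by the sed commands above
  grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
  # expected output, per the log:
  #   pause_image = "registry.k8s.io/pause:3.2"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"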
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
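A rough shell equivalent of the netfilter fallback above (a sketch, not part of the log; run as root on the guest): when the bridge sysctl is missing, load br_netfilter, then make sure IPv4 forwarding is on for pod traffic.
  # sketch of the fallback sequence recorded above
  sysctl net.bridge.bridge-nf-call-iptables || modprobe br_netfilter   # load module if the sysctl path is absent
  echo 1 > /proc/sys/net/ipv4/ip_forward                               # enable forwarding for pod networking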
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
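A sketch of the symlink scheme used in the steps above (illustrative only; the cert path and hash are the ones shown in the log): OpenSSL resolves CAs by subject hash, so each PEM gets both a name link and a <hash>.0 link under /etc/ssl/certs.
  cert=/usr/share/ca-certificates/86296.pem                        # example cert from the log
  hash=$(openssl x509 -hash -noout -in "$cert")                    # prints 51391683 for this cert
  sudo ln -fs "$cert" /etc/ssl/certs/86296.pem                     # name link
  sudo ln -fs /etc/ssl/certs/86296.pem "/etc/ssl/certs/${hash}.0"  # subject-hash link OpenSSL looks up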
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
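The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is how each control-plane cert is vetted before reuse. A minimal sketch (hypothetical loop, not from the log) of the same check over several certs:
  # sketch: flag any cert expiring within 24h
  for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
           /var/lib/minikube/certs/etcd/server.crt; do
    openssl x509 -noout -in "$c" -checkend 86400 || echo "$c expires within 24h"
  done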
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
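The sysctl failure above is expected while the br_netfilter module is not yet loaded; minikube then loads the module and enables IPv4 forwarding. A minimal sketch of the same preparation done by hand (assumes root on the guest):

    sudo modprobe br_netfilter                            # creates /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolves instead of "cannot stat"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # allow routed pod-to-pod traffic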
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
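	The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new on the guest (see the scp line further down). A minimal sketch of sanity-checking such a file by hand with the bundled kubeadm binary, using --dry-run so nothing is persisted; this is only an illustration, not how minikube itself drives kubeadm:

    # Hypothetical manual check; both paths appear elsewhere in this log.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run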
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
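The grep-then-remove sequence above checks whether each kubeconfig under /etc/kubernetes still references the expected control-plane endpoint and deletes the file when the check fails (or the file is absent). The sketch below is a rough local-process approximation of that pattern; it is not minikube's kubeadm.go, and minikube actually runs the equivalent commands on the guest over SSH via its ssh_runner, so the direct os/exec calls here are a simplification.

```go
// staleconf.go - simplified sketch of the "grep for the endpoint, otherwise
// remove the stale kubeconfig" loop seen in the log above. It shells out
// locally; minikube performs the same steps on the guest over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, conf := range confs {
		// grep exits non-zero when the endpoint is missing or the file does not exist,
		// which is exactly the "Process exited with status 2" case logged above.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%s looks stale or missing, removing\n", conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
```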
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
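	A minimal sketch, not part of the captured test output: the "Done!" line above ends the embed-certs-274758 start sequence, at which point minikube has pointed the kubeconfig at that cluster. Assuming the profile name shown in the log, the switch can be confirmed from a shell on the test host with:
	    kubectl config current-context    # expected to print: embed-certs-274758
	    kubectl get nodes -o wide         # the single node should report Ready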
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
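	The repeated [kubelet-check] messages above (process 133241) are kubeadm polling http://localhost:10248/healthz inside the guest and getting connection refused, i.e. the kubelet has not come up there yet. A minimal sketch of inspecting that state by hand while the wait loop runs; <profile> is a placeholder, since the profile behind pid 133241 is not named in this excerpt:
	    minikube ssh -p <profile> -- sudo systemctl status kubelet --no-pager
	    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 50 --no-pager
	    minikube ssh -p <profile> -- curl -sS http://localhost:10248/healthz   # prints "ok" once the kubelet is healthy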
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
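	Enabling the addons only applies the manifests; whether metrics-server ever reports Ready is what the pod_ready.go polling elsewhere in this log (and the failing MetricsServer-related tests) goes on to check. A minimal sketch of the equivalent manual check against this cluster, assuming the usual minikube context name, a deployment named metrics-server, and the k8s-app=metrics-server label:
	    kubectl --context default-k8s-diff-port-901295 -n kube-system \
	      rollout status deployment/metrics-server --timeout=120s
	    kubectl --context default-k8s-diff-port-901295 -n kube-system \
	      get pods -l k8s-app=metrics-server -o wide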
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
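
	The suggestion printed above amounts to two manual steps. A minimal sketch for retrying the failed profile by hand (the <profile> placeholder stands for the cluster name used in this run; the cgroup-driver flag is the one minikube itself recommends above, not something validated in this report):

		# Check why the kubelet never became healthy on the node
		sudo journalctl -xeu kubelet
		sudo systemctl status kubelet

		# Retry the start with the kubelet cgroup driver forced to systemd
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd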
	
	
	==> CRI-O <==
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.458989193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793785458968549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c59ed6de-a5a3-4ff8-87b7-9e82ed8af069 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.459535297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b8d4bde-5928-4ec2-8104-2384f8aebf1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.459586716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b8d4bde-5928-4ec2-8104-2384f8aebf1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.459765172Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b8d4bde-5928-4ec2-8104-2384f8aebf1e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.495385001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=941bbe5e-7842-4b1b-bace-3d8c15119ca5 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.495462766Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=941bbe5e-7842-4b1b-bace-3d8c15119ca5 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.496348262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=866ba0d6-019d-42c5-bbcb-50570ff47c26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.496680017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793785496656647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=866ba0d6-019d-42c5-bbcb-50570ff47c26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.497134118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30ab9da9-8c8c-4351-8158-5f5b6f9fa541 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.497185560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30ab9da9-8c8c-4351-8158-5f5b6f9fa541 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.497392733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=30ab9da9-8c8c-4351-8158-5f5b6f9fa541 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.531070571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7064f230-8bd9-485d-b152-7c02d07828da name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.531222049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7064f230-8bd9-485d-b152-7c02d07828da name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.532071582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26585632-a4d7-4ef3-9a3f-28eebfa3f23f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.532385195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793785532361687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26585632-a4d7-4ef3-9a3f-28eebfa3f23f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.532901451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29659071-92c1-4dca-ae9d-fdd510d85f13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.532969392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29659071-92c1-4dca-ae9d-fdd510d85f13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.533233167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29659071-92c1-4dca-ae9d-fdd510d85f13 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.562181185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e0a97c7-f2ea-4748-acf8-02e874b752d2 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.562249007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e0a97c7-f2ea-4748-acf8-02e874b752d2 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.562970462Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9966b74-d6c2-4f35-8bf3-826c465ed8d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.563365715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793785563343936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9966b74-d6c2-4f35-8bf3-826c465ed8d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.563833699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26efe0f1-6297-4528-9a16-4d6e4a7a31a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.563893037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26efe0f1-6297-4528-9a16-4d6e4a7a31a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:05 no-preload-584179 crio[713]: time="2024-12-10 01:23:05.564307897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26efe0f1-6297-4528-9a16-4d6e4a7a31a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8ccea68bfe8c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   43f1bc8c241bb       storage-provisioner
	d865adaba29cd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   6160e4478c9ac       busybox
	7d559bbd79cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   d191d9273f6a4       coredns-7c65d6cfc9-hhsm5
	abb7462dd698b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   43f1bc8c241bb       storage-provisioner
	eef419f8befc6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   bd89636873ac2       kube-proxy-xcjs2
	c9c3cf60e1de6       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   8dc1b0cbaf251       kube-scheduler-no-preload-584179
	bad358581c44d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   7a59f1561e329       etcd-no-preload-584179
	7147c6004e066       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   5e11ae9314894       kube-controller-manager-no-preload-584179
	0e94f76a99534       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   6767f178755eb       kube-apiserver-no-preload-584179
	
	
	==> coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53106 - 35035 "HINFO IN 457833088587050374.564137791752783472. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021462694s
	
	
	==> describe nodes <==
	Name:               no-preload-584179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-584179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=no-preload-584179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_59_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:59:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-584179
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:23:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:20:19 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:20:19 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:20:19 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:20:19 +0000   Tue, 10 Dec 2024 01:09:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.169
	  Hostname:    no-preload-584179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60d05cb18de2438e91da99c2b762f33f
	  System UUID:                60d05cb1-8de2-438e-91da-99c2b762f33f
	  Boot ID:                    8f8d21a7-9800-49be-b5b0-669683a98481
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-hhsm5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-584179                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-584179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-584179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-xcjs2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-584179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-lwgxd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node no-preload-584179 status is now: NodeReady
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-584179 event: Registered Node no-preload-584179 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-584179 event: Registered Node no-preload-584179 in Controller
	
	
	==> dmesg <==
	[Dec10 01:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053119] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042338] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 01:09] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.034606] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581147] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.320088] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.053949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049066] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.200738] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.109829] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.250556] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.002000] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.060801] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.949765] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +4.394534] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.197907] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +3.671201] kauditd_printk_skb: 61 callbacks suppressed
	[Dec10 01:10] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] <==
	{"level":"info","ts":"2024-12-10T01:09:35.212312Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:09:35.218025Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T01:09:35.220401Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f8345cbe35aa418e","initial-advertise-peer-urls":["https://192.168.50.169:2380"],"listen-peer-urls":["https://192.168.50.169:2380"],"advertise-client-urls":["https://192.168.50.169:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.169:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T01:09:35.220453Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T01:09:35.220632Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.169:2380"}
	{"level":"info","ts":"2024-12-10T01:09:35.220662Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.169:2380"}
	{"level":"info","ts":"2024-12-10T01:09:37.045257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e received MsgPreVoteResp from f8345cbe35aa418e at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became candidate at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e received MsgVoteResp from f8345cbe35aa418e at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became leader at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8345cbe35aa418e elected leader f8345cbe35aa418e at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.085177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:09:37.086168Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:09:37.086846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:09:37.089766Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f8345cbe35aa418e","local-member-attributes":"{Name:no-preload-584179 ClientURLs:[https://192.168.50.169:2379]}","request-path":"/0/members/f8345cbe35aa418e/attributes","cluster-id":"8f5c98dd1b14dce8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T01:09:37.090026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:09:37.091225Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:09:37.091950Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.169:2379"}
	{"level":"info","ts":"2024-12-10T01:09:37.102852Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T01:09:37.102895Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T01:19:37.125183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":878}
	{"level":"info","ts":"2024-12-10T01:19:37.134648Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":878,"took":"9.068924ms","hash":2104739600,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-10T01:19:37.134703Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2104739600,"revision":878,"compact-revision":-1}
	
	
	==> kernel <==
	 01:23:05 up 14 min,  0 users,  load average: 0.00, 0.05, 0.07
	Linux no-preload-584179 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] <==
	W1210 01:19:39.418858       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:19:39.419001       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:19:39.420109       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:19:39.420128       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:20:39.421194       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:20:39.421279       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 01:20:39.421317       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:20:39.421340       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:20:39.422407       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:20:39.422456       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:22:39.422655       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 01:22:39.422669       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:22:39.423156       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:22:39.423231       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:22:39.424903       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:22:39.425087       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] <==
	E1210 01:17:41.979024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:17:42.414304       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:18:11.985486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:12.423382       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:18:41.992393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:42.430770       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:19:11.998839       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:12.437655       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:19:42.005650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:42.444281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:20:12.014953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:12.451607       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:20:19.081408       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-584179"
	E1210 01:20:42.023448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:42.032102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="263.899µs"
	I1210 01:20:42.459599       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:20:53.027314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="132.532µs"
	E1210 01:21:12.029172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:12.466531       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:21:42.035025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:42.473470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:22:12.041108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:22:12.480242       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:22:42.046536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:22:42.487839       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:09:39.764829       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:09:39.778996       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.169"]
	E1210 01:09:39.779177       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:09:39.861413       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:09:39.861453       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:09:39.861486       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:09:39.866742       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:09:39.866995       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:09:39.867021       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:09:39.868776       1 config.go:199] "Starting service config controller"
	I1210 01:09:39.868817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:09:39.868880       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:09:39.868898       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:09:39.869938       1 config.go:328] "Starting node config controller"
	I1210 01:09:39.869966       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:09:39.969482       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:09:39.969535       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:09:39.970750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] <==
	W1210 01:09:38.387627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.387706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.387906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.387942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 01:09:38.388216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:09:38.388302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 01:09:38.388448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:09:38.388624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388939       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 01:09:38.389012       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 01:09:38.391485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 01:09:38.391598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.391717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.391795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.391904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 01:09:38.391936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.392115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 01:09:38.392193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.394124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:09:38.394161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1210 01:09:39.473379       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:21:54 no-preload-584179 kubelet[1432]: E1210 01:21:54.212263    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793714211314974,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:01 no-preload-584179 kubelet[1432]: E1210 01:22:01.013104    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:22:04 no-preload-584179 kubelet[1432]: E1210 01:22:04.214230    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793724213765926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:04 no-preload-584179 kubelet[1432]: E1210 01:22:04.214541    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793724213765926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:14 no-preload-584179 kubelet[1432]: E1210 01:22:14.215940    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793734215447588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:14 no-preload-584179 kubelet[1432]: E1210 01:22:14.216251    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793734215447588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:16 no-preload-584179 kubelet[1432]: E1210 01:22:16.012416    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:22:24 no-preload-584179 kubelet[1432]: E1210 01:22:24.218240    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793744217936774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:24 no-preload-584179 kubelet[1432]: E1210 01:22:24.218497    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793744217936774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:31 no-preload-584179 kubelet[1432]: E1210 01:22:31.013660    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]: E1210 01:22:34.052315    1432 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]: E1210 01:22:34.220089    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793754219739181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:34 no-preload-584179 kubelet[1432]: E1210 01:22:34.220116    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793754219739181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:44 no-preload-584179 kubelet[1432]: E1210 01:22:44.014736    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:22:44 no-preload-584179 kubelet[1432]: E1210 01:22:44.221353    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793764220930943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:44 no-preload-584179 kubelet[1432]: E1210 01:22:44.221422    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793764220930943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:54 no-preload-584179 kubelet[1432]: E1210 01:22:54.222743    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793774222335548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:54 no-preload-584179 kubelet[1432]: E1210 01:22:54.222787    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793774222335548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:55 no-preload-584179 kubelet[1432]: E1210 01:22:55.012938    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:23:04 no-preload-584179 kubelet[1432]: E1210 01:23:04.223880    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793784223644493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:23:04 no-preload-584179 kubelet[1432]: E1210 01:23:04.223912    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793784223644493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] <==
	I1210 01:10:10.308605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:10:10.319971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:10:10.320146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:10:27.717645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:10:27.717930       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11!
	I1210 01:10:27.718544       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f33a069-539f-40b3-a154-c9bb954b4b41", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11 became leader
	I1210 01:10:27.818913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11!
	
	
	==> storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] <==
	I1210 01:09:39.658823       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 01:10:09.661878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-584179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lwgxd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd: exit status 1 (69.204749ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lwgxd" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1210 01:15:09.289288   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:15:47.490824   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:16:32.362706   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:23:05.868742207 +0000 UTC m=+5978.948380350
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-901295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-901295 logs -n 25: (2.403267855s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
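For reference, the per-check [+]/[-] listing seen in the 500 responses above is what kube-apiserver returns for its health endpoints when asked verbosely. A minimal sketch of probing the same endpoint by hand, assuming the host and port from this log (192.168.72.76:8443) and skipping TLS verification:

    # Probe the same health endpoint minikube polls above; ?verbose asks the
    # apiserver to include the per-check [+]/[-] listing in the response body.
    # --insecure skips verification of the apiserver's serving certificate.
    curl --insecure "https://192.168.72.76:8443/healthz?verbose"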
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
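The 496-byte conflist written above is not reproduced in the log. Purely for orientation, a bridge CNI configuration of the kind this step installs generally has the following shape; the field values here are an illustrative sketch, not the exact file minikube generated:

    # Illustrative bridge CNI conflist (values assumed, not captured from this run);
    # written to a scratch path rather than the live /etc/cni/net.d directory.
    cat <<'EOF' > /tmp/1-k8s.conflist.example
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF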
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
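The server certificate above is generated by minikube's own Go code. As a rough sketch of what the step amounts to, a certificate carrying the same subject and SANs could be approximated with openssl; it is self-signed here for brevity, whereas minikube signs it with the cluster CA (ca.pem/ca-key.pem):

    # Rough, self-signed approximation of the server cert generated above
    # (minikube signs with its CA rather than self-signing; names taken from the log).
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout server-key.pem -out server.pem \
      -subj "/O=jenkins.old-k8s-version-094470" \
      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.61.11,DNS:localhost,DNS:minikube,DNS:old-k8s-version-094470"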
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
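Taken together, the three sed edits above leave the cri-o drop-in with a pause image and cgroup settings along these lines; the section layout is assumed from cri-o's standard configuration format and was not read back from the VM:

    # Inspect the drop-in after the edits; typical resulting values shown as
    # comments (layout assumed, not captured from this run).
    cat /etc/crio/crio.conf.d/02-crio.conf
    # [crio.image]
    # pause_image = "registry.k8s.io/pause:3.2"
    #
    # [crio.runtime]
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"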
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
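
The modprobe and echo above load br_netfilter and enable IPv4 forwarding so bridged pod traffic is routed and visible to iptables. A small Go sketch of the equivalent writes (requires root; paths taken from the log, the program itself is illustrative):

package main

import (
	"log"
	"os"
)

func main() {
	// Same effect as `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// bridge-nf-call-iptables only exists once br_netfilter is loaded (the modprobe above).
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		log.Printf("br_netfilter not loaded yet: %v", err)
	}
}
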
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
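
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway entry. A hedged Go sketch of that dedupe-and-append idea (IP and hostname come from the log; upsertHostsEntry is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes lines ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	out := strings.Join(kept, "\n")
	if out != "" && !strings.HasSuffix(out, "\n") {
		out += "\n"
	}
	return out + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Print rather than write back: this is only a sketch of the transformation.
	fmt.Print(upsertHostsEntry(string(data), "192.168.61.1", "host.minikube.internal"))
}
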
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
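
The pod_ready lines above poll the pod until it reports the Ready condition. A minimal client-go sketch of the same check (kubeconfig path is an assumption; pod name is taken from the log; this is not the minikube implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-mhxtf", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				// Same condition the log reports as "Ready":"True" / "Ready":"False".
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
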
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
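
The retry.go lines above re-query the libvirt DHCP leases with a growing, jittered delay until the domain gets an address. A small Go sketch of that retry shape (lookupIP is a stand-in, not the libmachine code; delays are illustrative):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for "find the domain's current DHCP lease".
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine IP:", ip)
			return
		}
		// Grow the wait and add jitter, like the ~0.3s..3s delays in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for machine to come up")
}
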
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
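
The ln -fs commands above install each CA under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem) so TLS clients that scan /etc/ssl/certs can find it. A hedged Go sketch that shells out to openssl for the hash and creates the link (the cert path is taken from the log; the program itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject-name hash used for the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // the -f behaviour of ln
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
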
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
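
Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The same test expressed with Go's crypto/x509 (a sketch; the cert path is one of the files checked in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next day.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another 24h")
}
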
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
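
After the kubeadm init phases, the log polls pgrep roughly every 500ms waiting for the kube-apiserver process to appear. A Go sketch of that wait loop (the command is the one in the log, run locally instead of over SSH; the overall timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// Same process check the log repeats.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
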
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
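	The "openssl x509 -noout -in <cert> -checkend 86400" calls just above each ask whether a control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means the cert does not expire within that window. Purely as an illustrative sketch (not minikube's certs.go, and using the first file path the log happens to inspect), the same check in Go looks like this:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at path
	// expires within d -- the condition that `openssl x509 -checkend` tests for.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path taken from the log above; any PEM-encoded certificate works.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		fmt.Println("expires within 24h:", soon)
	}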
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
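	The libmachine lines above show the kvm2 driver polling for the VM's DHCP lease and backing off between attempts ("will retry after 1.149002375s", "2.260301884s", ...). The snippet below is only a sketch of that general jittered-backoff retry shape, with hypothetical parameters; it is not minikube's retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn up to attempts times, sleeping a growing, jittered delay
	// between failures, similar in spirit to the "will retry after ..." log lines.
	// base must be non-zero.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Double the delay each attempt and add up to 50% random jitter.
			d := base * time.Duration(1<<i)
			d += time.Duration(rand.Int63n(int64(d / 2)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retry(5, time.Second, func() error {
			calls++
			if calls < 3 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("result:", err)
	}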
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
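	Above, the restart logic repeatedly probes https://192.168.39.193:8444/healthz, tolerating the 403 responses (the probe is anonymous) and the 500 responses (post-start hooks such as rbac/bootstrap-roles are still running) until the endpoint finally returns 200. A self-contained sketch of that kind of probe, assuming only the endpoint URL seen in the log and skipping TLS verification for illustration, could look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// Non-200 responses (403, 500) and transport errors are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a cluster-internal certificate, so this
			// illustrative probe skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to return 200", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.193:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}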
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
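	The pod_ready.go lines above wait for each system-critical pod to report the "Ready" condition; while the node itself is still NotReady the per-pod wait is skipped with an explanatory error, and kube-proxy is the first pod to come back Ready:"True". A rough equivalent of that per-pod check using client-go (a hedged sketch with a hypothetical kubeconfig path, not minikube's helper) is:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; the test harness keeps its own profile dirs.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-5szz9", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}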
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
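The NodePressure step reads the node's capacity and conditions; the same data is visible through standard kubectl (node name from the log, jsonpath output is a raw capacity map):

    $ kubectl describe node embed-certs-274758 | grep -A 8 'Conditions:'
    $ kubectl get node embed-certs-274758 -o jsonpath='{.status.capacity}'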
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
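Once a profile reports Done!, the kubeconfig context it wrote can be exercised directly; a short sketch using standard kubectl (context name taken from the log):

    $ kubectl config current-context        # expected: embed-certs-274758
    $ kubectl get nodes -o wide
    $ kubectl -n kube-system get pods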
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
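When kubeadm's kubelet-check keeps reporting "connection refused" on port 10248, the usual manual checks on the node are the kubelet's own health endpoint and its systemd unit; a sketch with standard commands (not taken from this log):

    $ curl -s http://localhost:10248/healthz          # kubelet health endpoint
    $ sudo systemctl status kubelet --no-pager
    $ sudo journalctl -u kubelet -n 100 --no-pager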
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
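The bootstrap token printed above expires (24h by default). If it has lapsed by the time a node is joined, an equivalent join command can be regenerated on the control plane:

    $ sudo kubeadm token create --print-join-command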
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
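A pod that stays not Ready for the whole wait budget, as metrics-server-6867b74b74-lwgxd does here, is usually diagnosed with standard kubectl (pod name taken from the log; the deploy/ shorthand assumes the usual metrics-server Deployment name):

    $ kubectl -n kube-system describe pod metrics-server-6867b74b74-lwgxd
    $ kubectl -n kube-system logs deploy/metrics-server --tail=100
    $ kubectl -n kube-system get events --sort-by=.lastTimestamp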
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
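The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is not shown in the log. A minimal bridge conflist of the kind the CNI bridge plugin accepts looks roughly like the sketch below; field values are illustrative, not the exact file minikube writes:

    $ sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF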
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
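The same node-readiness condition can be checked from kubectl; a sketch using the node name from the log:

    $ kubectl get node default-k8s-diff-port-901295
    $ kubectl wait --for=condition=Ready node/default-k8s-diff-port-901295 --timeout=6m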
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
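After the addons report enabled, their objects can be checked directly. A sketch with standard kubectl; the resource names follow the stock metrics-server and storage-provisioner manifests and are assumptions about what the addon installs:

    $ kubectl -n kube-system get deploy metrics-server
    $ kubectl get apiservice v1beta1.metrics.k8s.io
    $ kubectl get storageclass
    $ kubectl top nodes        # only succeeds once metrics-server is serving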
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
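The log-gathering loop above reduces to a few crictl invocations that can also be run by hand over minikube ssh; a sketch, with <profile> and <container-id> as placeholders since this process's profile name does not appear in the excerpt:

    $ minikube -p <profile> ssh -- sudo crictl ps -a
    $ minikube -p <profile> ssh -- sudo crictl logs --tail 400 <container-id>
    $ minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400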
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
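The gather cycle above follows a fixed pattern: ask crictl for the container ID of each component by name, then tail the last 400 lines of that container's log. A minimal shell sketch of the same pattern, run directly on the node; "etcd" is just an illustrative component name here, and crictl is assumed to be installed with its runtime endpoint already configured:

    # find the container ID for a component, in any state
    ID=$(sudo crictl ps -a --quiet --name=etcd)
    if [ -n "$ID" ]; then
        # mirror the "--tail 400" used by the log gatherer above
        sudo crictl logs --tail 400 "$ID"
    else
        echo "no container found matching etcd"
    fi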
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
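The start sequence above only reports success after the apiserver's /healthz endpoint returns 200. The same probe can be run by hand; 192.168.50.169:8443 is the address reported in this run, and -k skips verification of the cluster's self-signed CA, so treat this as a quick liveness check rather than a secure client:

    # prints the HTTP status; the response body itself is just "ok"
    curl -k -sS -o /dev/null -w '%{http_code}\n' https://192.168.50.169:8443/healthz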
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
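Both runs also gate on the kubelet systemd unit being active ("sudo systemctl is-active --quiet service kubelet" above) before declaring the node ready. A sketch of the same check using only the exit code; note it queries the plain kubelet unit rather than repeating the literal "service" token from the logged command:

    if sudo systemctl is-active --quiet kubelet; then
        echo "kubelet service is running"
    else
        echo "kubelet service is not active" >&2
    fi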
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
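The repeated kubelet-check messages come from kubeadm polling the kubelet's local health endpoint on port 10248; "connection refused" means nothing is listening there, i.e. the kubelet never came up. A short sketch of the same probe plus the triage commands quoted in the output, to be run on the affected node (the crio socket path is the one shown above):

    # kubeadm's liveness probe against the kubelet
    curl -sSL http://localhost:10248/healthz || echo "kubelet healthz unreachable"

    # triage steps suggested by kubeadm
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause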
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
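The block above is minikube's stale-kubeconfig cleanup before retrying kubeadm init: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and deletes the file when the endpoint is absent (here the files simply do not exist yet, so the greps exit with status 2 and the rm calls are no-ops). The same loop, sketched as plain shell:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
            # missing file or different endpoint: remove it so kubeadm regenerates it
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done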
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
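The final container-status dump uses a double fallback: prefer whatever crictl `which` finds, fall back to the bare name on PATH, and if that whole command fails fall back to `docker ps -a`. Written out on its own for clarity:

    # crictl if available (by path or name), otherwise the docker CLI
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a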
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
	
	
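	A minimal sketch of the troubleshooting that minikube's suggestion above points at (journalctl plus pinning the kubelet cgroup driver), assuming a host shell with minikube on PATH; <profile> is a placeholder for the failing cluster's profile name, not a value taken from this log:
	
		# inspect the kubelet unit on the node where kubeadm init timed out
		minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
		
		# retry the start with the cgroup driver forced to systemd, per the suggestion and minikube issue #4172
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	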
	==> CRI-O <==
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.692240431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793787692212927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b327221-df0e-4036-bb64-81704bd75e24 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.693019519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79ca874c-4b9f-45df-b716-63bdf746b85e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.693081844Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79ca874c-4b9f-45df-b716-63bdf746b85e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.693337361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79ca874c-4b9f-45df-b716-63bdf746b85e name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.743040272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2309c11f-8372-441d-9dc8-2daec56b1dcb name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.743159113Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2309c11f-8372-441d-9dc8-2daec56b1dcb name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.744991376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=904b8f7c-dc1e-4d5d-b32b-6926b98b48cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.745489514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793787745459115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=904b8f7c-dc1e-4d5d-b32b-6926b98b48cd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.746054371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be8828dc-4057-4f35-a5c5-9cbd43ca8616 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.746120440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be8828dc-4057-4f35-a5c5-9cbd43ca8616 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.746311290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be8828dc-4057-4f35-a5c5-9cbd43ca8616 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.791041895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84d5c88a-2f1d-4721-9e2a-c0cc9961c64c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.791126074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84d5c88a-2f1d-4721-9e2a-c0cc9961c64c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.792472738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02bc40eb-a271-4806-8cef-2d01e5af420a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.792919521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793787792894581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02bc40eb-a271-4806-8cef-2d01e5af420a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.793551688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d00611c-ba41-4408-8e4b-83ee29fc1492 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.793653325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d00611c-ba41-4408-8e4b-83ee29fc1492 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.794002029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d00611c-ba41-4408-8e4b-83ee29fc1492 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.831213273Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6adcb273-e74e-44a9-bafa-5d664d6f88cb name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.831347654Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6adcb273-e74e-44a9-bafa-5d664d6f88cb name=/runtime.v1.RuntimeService/Version
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.832639374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83e0116a-8fe1-4af4-af3e-5e40d6c7485b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.833092427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793787833070597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83e0116a-8fe1-4af4-af3e-5e40d6c7485b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.833724612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5cd225b0-a263-4a68-b14e-068f78395797 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.833837004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5cd225b0-a263-4a68-b14e-068f78395797 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:23:07 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:23:07.834063908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5cd225b0-a263-4a68-b14e-068f78395797 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52a45c139cf7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d77aa12393140       storage-provisioner
	f6372d6d257a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   05bf9be9323da       coredns-7c65d6cfc9-wr22x
	5a52f58a219a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   1c493db8f9217       coredns-7c65d6cfc9-4snjr
	c886354b05829       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   0aa07862419df       kube-proxy-mcrmk
	33af2665f03e9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   5b8234923abb6       kube-controller-manager-default-k8s-diff-port-901295
	a5f83dfbd84c1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   2311235619d5d       kube-scheduler-default-k8s-diff-port-901295
	e4a420a8c6b03       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   67bd39b722fbf       etcd-default-k8s-diff-port-901295
	8d394dc046928       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   f046688426640       kube-apiserver-default-k8s-diff-port-901295
	9d2e60f0d4eb9       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   cabe55eda9171       kube-apiserver-default-k8s-diff-port-901295
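
	The container listing above is the CRI runtime's view of the node. If the default-k8s-diff-port-901295 profile were still running, roughly the same view could be reproduced over minikube ssh with crictl; a minimal sketch, using only the profile and container IDs from this report (crictl usually needs root inside the guest):

	  $ minikube ssh -p default-k8s-diff-port-901295 -- sudo crictl ps -a                 # all containers, including exited ones
	  $ minikube ssh -p default-k8s-diff-port-901295 -- sudo crictl logs 9d2e60f0d4eb9    # the exited kube-apiserver attempt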
	
	
	==> coredns [5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-901295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-901295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=default-k8s-diff-port-901295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 01:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-901295
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:23:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:19:05 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:19:05 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:19:05 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:19:05 +0000   Tue, 10 Dec 2024 01:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    default-k8s-diff-port-901295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ca8ebec5ac643cca4f6efe51370db7b
	  System UUID:                2ca8ebec-5ac6-43cc-a4f6-efe51370db7b
	  Boot ID:                    05788dbc-2bfa-4ea0-bfa7-aafcafe02894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4snjr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-wr22x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-default-k8s-diff-port-901295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-901295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-901295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-mcrmk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-901295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-rlg4g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node default-k8s-diff-port-901295 event: Registered Node default-k8s-diff-port-901295 in Controller
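
	The node summary above is standard kubectl describe output; with the same context it could be regenerated, and the (expectedly failing) metrics path exercised, with a sketch like this, using only names that appear in this report:

	  $ kubectl --context default-k8s-diff-port-901295 describe node default-k8s-diff-port-901295
	  $ kubectl --context default-k8s-diff-port-901295 top node    # needs metrics-server, which is in ImagePullBackOff here, so this should fail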
	
	
	==> dmesg <==
	[  +0.056298] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.062384] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.026226] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.441267] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.198852] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.057824] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061100] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.174382] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.136630] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.276898] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[Dec10 01:09] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +1.851621] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +0.066667] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.507222] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.901427] kauditd_printk_skb: 85 callbacks suppressed
	[Dec10 01:13] systemd-fstab-generator[2625]: Ignoring "noauto" option for root device
	[  +0.061004] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.982050] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +0.079152] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.311877] systemd-fstab-generator[3055]: Ignoring "noauto" option for root device
	[  +0.095140] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 01:14] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88] <==
	{"level":"info","ts":"2024-12-10T01:13:45.249743Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T01:13:45.255582Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"97ba5874d4d591f6","initial-advertise-peer-urls":["https://192.168.39.193:2380"],"listen-peer-urls":["https://192.168.39.193:2380"],"advertise-client-urls":["https://192.168.39.193:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.193:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T01:13:45.255620Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T01:13:45.249975Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-12-10T01:13:45.255656Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.193:2380"}
	{"level":"info","ts":"2024-12-10T01:13:46.192391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:46.192514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:46.192552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgPreVoteResp from 97ba5874d4d591f6 at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:46.192583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:46.192607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 received MsgVoteResp from 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:46.192633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"97ba5874d4d591f6 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:46.192659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 97ba5874d4d591f6 elected leader 97ba5874d4d591f6 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:46.194048Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:46.194224Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"97ba5874d4d591f6","local-member-attributes":"{Name:default-k8s-diff-port-901295 ClientURLs:[https://192.168.39.193:2379]}","request-path":"/0/members/97ba5874d4d591f6/attributes","cluster-id":"9afeb12ac4c1a90a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T01:13:46.194426Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:46.194835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:46.195059Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9afeb12ac4c1a90a","local-member-id":"97ba5874d4d591f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:46.195207Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:46.195248Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:46.197970Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:46.198283Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:46.198315Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:46.198725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:13:46.198879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:46.199528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	
	
	==> kernel <==
	 01:23:08 up 14 min,  0 users,  load average: 0.07, 0.10, 0.09
	Linux default-k8s-diff-port-901295 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951] <==
	E1210 01:18:48.502743       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:18:48.502910       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:18:48.504059       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:18:48.504200       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:19:48.504738       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 01:19:48.504759       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:19:48.505094       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:19:48.505150       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:19:48.506276       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:19:48.506338       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:21:48.507230       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:21:48.507354       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 01:21:48.507250       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:21:48.507455       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:21:48.508852       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:21:48.508922       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
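
	These repeating apiserver errors are the aggregation layer failing to reach metrics-server: the v1beta1.metrics.k8s.io APIService keeps returning 503, which matches the ImagePullBackOff seen in the kubelet log further down. A sketch for inspecting the registered APIService directly, reusing the context name from this report:

	  $ kubectl --context default-k8s-diff-port-901295 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-901295 get apiservice v1beta1.metrics.k8s.io -o yaml    # the Available condition should carry the 503 reason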
	
	
	==> kube-apiserver [9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79] <==
	W1210 01:13:40.812227       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.831876       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.851857       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.855226       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.858575       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.859868       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.872920       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.910741       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.925317       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.943956       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.946307       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.949675       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.952930       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.955307       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.030380       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.064271       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.084186       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.088560       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.146398       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.196502       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.200994       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.219903       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.344969       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.365856       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.522854       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f] <==
	E1210 01:17:54.521238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:17:54.952832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:18:24.527292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:24.960126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:18:54.536082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:18:54.967559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:19:05.950319       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-901295"
	E1210 01:19:24.543343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:24.975326       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:19:54.550574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:19:54.982892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:19:55.916856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="244.559µs"
	I1210 01:20:08.916347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="47.779µs"
	E1210 01:20:24.557025       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:24.990196       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:20:54.564961       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:20:54.997500       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:21:24.570864       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:25.004906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:21:54.577603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:21:55.016753       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:22:24.583393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:22:25.024305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:22:54.590127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:22:55.032086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:13:56.178804       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:13:56.198101       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	E1210 01:13:56.198189       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:13:56.306208       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:13:56.306292       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:13:56.306346       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:13:56.333033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:13:56.333475       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:13:56.333497       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:13:56.370970       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:13:56.370994       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:13:56.371692       1 config.go:328] "Starting node config controller"
	I1210 01:13:56.371706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:13:56.379046       1 config.go:199] "Starting service config controller"
	I1210 01:13:56.379207       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:13:56.471354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:13:56.480063       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:13:56.489250       1 shared_informer.go:320] Caches are synced for node config
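
	The nftables cleanup at kube-proxy startup fails with "Operation not supported" (the guest kernel lacks nf_tables support), which is harmless here: the iptables proxier is what actually gets used and its caches sync normally. The same log can be pulled through the API server; a sketch using the pod name from the report:

	  $ kubectl --context default-k8s-diff-port-901295 -n kube-system logs kube-proxy-mcrmk --tail=50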
	
	
	==> kube-scheduler [a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da] <==
	W1210 01:13:47.551437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:47.551972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:47.551990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:47.552024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:47.552041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.393042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:13:48.393184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.543009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 01:13:48.543093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.634559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:48.634632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.637892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:48.638043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.656045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:48.656122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.706062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 01:13:48.706109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.736707       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 01:13:48.736753       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 01:13:48.762534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:48.762650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1210 01:13:50.538326       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:21:51 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:21:51.899289    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:22:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:00.068714    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793720068462293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:00.068873    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793720068462293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:03 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:03.898457    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:22:10 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:10.069934    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793730069651299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:10 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:10.069970    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793730069651299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:17 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:17.898567    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:22:20 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:20.073645    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793740073380508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:20 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:20.073691    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793740073380508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:30 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:30.078726    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750078179354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:30 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:30.078909    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793750078179354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:31 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:31.898638    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:22:40 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:40.081151    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793760080484111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:40 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:40.081189    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793760080484111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:46 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:46.898216    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:22:49 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:49.917608    2952 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:22:49 default-k8s-diff-port-901295 kubelet[2952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:22:49 default-k8s-diff-port-901295 kubelet[2952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:22:49 default-k8s-diff-port-901295 kubelet[2952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:22:49 default-k8s-diff-port-901295 kubelet[2952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:22:50 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:50.082503    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793770082069058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:22:50 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:22:50.082587    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793770082069058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:23:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:23:00.084419    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793780083701776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:23:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:23:00.084817    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793780083701776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:23:01 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:23:01.899951    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
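
	metrics-server never starts because its image reference points at an unreachable registry (fake.domain/registry.k8s.io/echoserver:1.4), so kubelet keeps reporting ImagePullBackOff. A sketch for confirming what the Deployment is actually configured with; the Deployment name metrics-server is inferred from the ReplicaSet name metrics-server-6867b74b74 and is not spelled out in this report:

	  $ kubectl --context default-k8s-diff-port-901295 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'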
	
	
	==> storage-provisioner [52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed] <==
	I1210 01:13:56.992459       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:13:57.016061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:13:57.016204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:13:57.066399       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:13:57.073848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb814050-68d7-4c7a-9b72-ae74e9338a4f", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa became leader
	I1210 01:13:57.075043       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa!
	I1210 01:13:57.176002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rlg4g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g: exit status 1 (67.157278ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rlg4g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
E1210 01:18:50.568168   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
E1210 01:20:09.289636   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
[the same WARNING from helpers_test.go:329 was repeated 37 more times: dial tcp 192.168.61.11:8443: connect: connection refused]
E1210 01:20:47.491155   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
[the same WARNING from helpers_test.go:329 was repeated 83 more times: dial tcp 192.168.61.11:8443: connect: connection refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
[... the connection-refused warning above was logged 38 more times while the apiserver at 192.168.61.11:8443 remained unreachable ...]
E1210 01:25:09.289790   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
[... the same connection-refused warning repeated a further 36 times before the client rate limiter gave up ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (233.299838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-094470" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
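[editor's note: the wait that fails above is a label-selector poll against the kubernetes-dashboard namespace. To reproduce the check by hand against this profile, something along the following lines should work; this is a rough sketch using the same selector and 9m timeout as the test, and it assumes the kubeconfig context name matches the minikube profile name:

    kubectl --context old-k8s-version-094470 get pods --namespace=kubernetes-dashboard --selector=k8s-app=kubernetes-dashboard
    kubectl --context old-k8s-version-094470 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s

Both commands fail fast with the same "connection refused" error seen above when the apiserver on 192.168.61.11:8443 is down, which is why the helper falls through to the status/post-mortem steps below.]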
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (222.077392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25
E1210 01:25:47.491330   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25: (1.394085498s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
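The repeated retry.go:31 lines above come from a retry-with-backoff loop that keeps polling libvirt for the domain's DHCP lease until an IP address appears. A stripped-down sketch of that pattern (an assumed shape, not minikube's actual retry helper):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or timeout elapses, sleeping a
// randomized, growing delay between attempts, similar to the
// "will retry after ..." messages in the log.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
}

func main() {
	start := time.Now()
	err := retryUntil(10*time.Second, func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}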
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
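The kubeadm.yaml above stitches together four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) with node-specific values such as the advertise address, node name and Kubernetes version filled in. A hedged sketch of how such a file can be rendered from a template; the template text and field names here are illustrative, not minikube's actual bootstrapper template:

package main

import (
	"os"
	"text/template"
)

// nodeValues holds the per-node fields substituted into the config;
// the struct is illustrative, not minikube's real type.
type nodeValues struct {
	AdvertiseAddress  string
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	vals := nodeValues{
		AdvertiseAddress:  "192.168.72.76",
		NodeName:          "embed-certs-274758",
		KubernetesVersion: "v1.31.2",
		PodSubnet:         "10.244.0.0/16",
	}
	if err := t.Execute(os.Stdout, vals); err != nil {
		panic(err)
	}
}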
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
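Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what triggers regeneration on restart. An equivalent check in Go, a sketch using only the standard library rather than minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; point this at any local certificate to try it.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}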
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
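The healthz probing above follows the usual startup sequence: 403 while anonymous requests are not yet authorized (the RBAC bootstrap roles are still missing), 500 while post-start hooks such as rbac/bootstrap-roles are reported as failed, then 200 once startup completes. A minimal Go polling loop against such an endpoint (illustrative only; TLS verification is skipped here purely to keep the sketch self-contained, whereas a real client should trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.76:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}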
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
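pod_ready.go above waits per-pod for the Ready condition, skipping pods whose node is itself not yet "Ready". A bare-bones version of that wait using client-go (a sketch assuming a reachable kubeconfig; not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or timeout elapses.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-embed-certs-274758", 4*time.Minute)
	fmt.Println(err)
}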
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
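WaitForSSH above shells out to the system ssh binary and simply runs `exit 0` until the command succeeds. The same liveness check can be done natively with golang.org/x/crypto/ssh; this is a sketch, with the key path and address copied from the log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshAlive returns nil once "exit 0" can be run over SSH at addr.
func sshAlive(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshAlive("192.168.61.11:22", "docker",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}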
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
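
The test -s / ln -fs / openssl x509 -hash sequence above installs each CA PEM under /usr/share/ca-certificates and then links it from /etc/ssl/certs under a hash-named file (the OpenSSL c_rehash layout, e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of that convention, assuming the openssl binary is on PATH; illustrative only, not minikube's implementation:

    // Sketch: create the <subject-hash>.0 symlink for a CA certificate,
    // mirroring the ln -fs commands in the log above.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))             // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0") // <subject-hash>.0
        _ = os.Remove(link)                                // replace an existing link, like ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
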
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
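
Each "openssl x509 -checkend 86400" run above succeeds only if the certificate is still valid 24 hours from now. The Go equivalent using the standard library, shown as a sketch rather than minikube's code:

    // Sketch: report whether a certificate expires within the next 24h,
    // matching "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
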
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
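
Because the old admin/kubelet/controller-manager/scheduler conf files were missing, the control plane is rebuilt by rerunning individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of that sequence driven from Go; the PATH prefix and config path are copied from the log, the runner itself is illustrative, not minikube's own:

    // Sketch: rerun the kubeadm init phases seen above, in order, and stop
    // at the first failure.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("phase %v failed: %v", p, err)
            }
        }
    }
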
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
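
After the phases complete, the log polls for a kube-apiserver process roughly every 500ms with pgrep. A minimal polling sketch of the same check; the overall timeout is an assumption made for the example, not a value taken from the log:

    // Sketch: wait for "sudo pgrep -xnf kube-apiserver.*minikube.*" to return
    // a pid, retrying every 500ms until an assumed deadline.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("kube-apiserver pid: %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for kube-apiserver process")
    }
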
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
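
The provision step above issues a server certificate whose SANs cover the loopback address, the VM's IP, the machine name, localhost and minikube, signed with the shared CA (ca.pem / ca-key.pem). A minimal Go sketch carrying the same SANs; for brevity it is self-signed rather than CA-signed, so it is an illustration of the certificate shape, not the provisioner's actual code:

    // Sketch: build a server certificate with the SAN list shown in the
    // provision.go line above and print it as PEM.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-901295"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the log line above
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.193")},
            DNSNames:    []string{"default-k8s-diff-port-901295", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
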
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
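
fix.go reads the guest's clock with "date +%s.%N" and compares it against the host-side timestamp it recorded, resynchronizing only if the difference exceeds a tolerance; here the ~83ms delta is accepted. A small sketch reproducing the delta reported above; the tolerance value is an assumption for illustration, not minikube's actual setting:

    // Sketch: recompute the guest/host clock delta from the timestamps in the
    // log and compare it against an assumed tolerance.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1733792936, 125620375)                        // parsed from "date +%s.%N"
        remote := time.Date(2024, 12, 10, 1, 8, 56, 42918319, time.UTC)  // host-side timestamp from the log
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 1 * time.Second // assumed for this sketch
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }
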
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
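The grep/rewrite pair above pins host.minikube.internal to the gateway IP in the guest's /etc/hosts: any existing entry is dropped and the current mapping is appended. A rough Go equivalent, assuming the same tab-separated hosts format (illustrative sketch only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any line ending in "\t<name>" and appends the desired
// "<ip>\t<name>" mapping, like the bash one-liner in the log above.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}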
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
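The "will retry after ..." lines from the no-preload-584179 machine come from a wait loop that polls libvirt for a DHCP lease, sleeping for a roughly doubling, jittered interval between attempts. A generic sketch of that pattern follows; the backoff policy and attempt limit are chosen for illustration and may differ from the real retry.go behavior:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a jittered, roughly doubling delay, similar to the
// "waiting for machine to come up" loop above. maxAttempts and the base delay
// are illustrative values, not minikube's actual policy.
func waitFor(fn func() error, maxAttempts int) error {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	attempts := 0
	_ = waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 10)
}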
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
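The openssl x509 -checkend 86400 invocations above confirm that each control-plane certificate remains valid for at least the next 24 hours before the restart proceeds (openssl exits 0 when the certificate will not expire within the window). The same check expressed in Go, as an illustrative sketch; the certificate path is just an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at certPath expires within the
// given window, mirroring `openssl x509 -checkend` in the log above.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}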
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
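In the restart path above, each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8444 endpoint and removed when it is missing or stale, so the subsequent kubeadm init phase kubeconfig regenerates it. A condensed Go sketch of that cleanup, illustrative only:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfigs removes kubeconfigs that do not reference the expected
// control-plane endpoint, as the grep/rm sequence in the log above does.
func pruneStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or stale: remove so "kubeadm init phase kubeconfig" rewrites it.
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}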
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
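The sequence above (403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200 "ok") is the normal progression of /healthz on a restarting apiserver; the log polls roughly every 500ms until it succeeds. A minimal polling sketch, assuming a self-signed bootstrap certificate and the same endpoint (illustrative only, not minikube's client code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes, mirroring the health wait in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The bootstrap apiserver presents a cert this example does not trust,
		// so verification is skipped here; a real client would use the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403/500 are expected while bootstrap hooks are still completing.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.193:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}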
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
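The pod_ready.go lines above poll each control-plane pod for the Ready condition and bail out early with a node ... is currently not "Ready" error while the hosting node itself is still NotReady. A rough client-go equivalent of the per-pod wait (the kubeconfig path and pod name are placeholders; the node-Ready short-circuit is noted but omitted for brevity):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Wait up to 4 minutes for one pod to report Ready, like pod_ready.go above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-5szz9", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient errors
                }
                // minikube also gives up early when the hosting node is not Ready;
                // that extra check is omitted here.
                return podIsReady(pod), nil
            })
        fmt.Println("wait result:", err)
    }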
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
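The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a ~500 ms poll for the restarted apiserver process to reappear. Stripped of minikube's ssh_runner, the loop amounts to something like this local sketch (the overall timeout is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process exists.
            if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                fmt.Println("kube-apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kube-apiserver process")
    }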
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
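configureAuth above copies the host CA material under ~/.minikube, mints a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name, and then scp's the results to /etc/docker on the guest. A self-contained sketch of just the certificate-issuing step using the Go standard library; the throwaway CA generated here is a stand-in for minikube's persistent ca.pem/ca-key.pem, and error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA generated on the spot; minikube instead reuses
        // ~/.minikube/certs/ca.pem and ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SAN set seen in the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-584179"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-584179"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.169")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }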
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
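fix.go above reads the guest clock with `date +%s.%N`, parses the seconds.nanoseconds string, and keeps the host when the delta against the local clock (~72 ms here) stays within tolerance. A small sketch of that comparison, reusing the raw value captured in the log (the 2 s tolerance below is illustrative, not minikube's exact threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        raw := "1733792955.705138377" // output of `date +%s.%N` on the guest, taken from the log above
        parts := strings.SplitN(strings.TrimSpace(raw), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }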
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
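The lines just above show the bridge-netfilter fallback: the `sysctl net.bridge.bridge-nf-call-iptables` probe fails because br_netfilter is not loaded, so the module is loaded explicitly and IPv4 forwarding is enabled before CRI-O is restarted. A minimal local sketch of that sequence (must run as root; minikube performs it on the guest over SSH):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Probe bridge netfilter; a missing /proc entry means br_netfilter is not loaded.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // Fall back to loading the module explicitly, as crio.go does above.
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                log.Fatalf("modprobe br_netfilter: %v", err)
            }
        }
        // Enable IPv4 forwarding so pod traffic can be routed off the node.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            log.Fatalf("enable ip_forward: %v", err)
        }
        log.Println("bridge netfilter and ip_forward configured")
    }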
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
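The sequence that finishes here (podman image inspect, stat on the cached tarball, podman load, crictl rmi for stale tags) is a check-then-load pattern for priming the container runtime from an image cache. A minimal local sketch of that pattern, assuming podman is installed and runnable via sudo; minikube runs the equivalent commands over SSH inside the VM, and the image/tarball pair below is illustrative only:

    // loadimages_sketch.go — load a cached image tarball only if the image is
    // not already present in the podman store (local sketch, not the minikube code).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagePresent asks podman whether the image is already in the container store.
    func imagePresent(ref string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    // loadTarball streams a cached image archive into the podman store.
    func loadTarball(path string) error {
    	out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", path, err, out)
    	}
    	return nil
    }

    func main() {
    	// Illustrative image/tarball pair; a real caller would loop over the full set.
    	ref := "registry.k8s.io/etcd:3.5.15-0"
    	tarball := "/var/lib/minikube/images/etcd_3.5.15-0"

    	if imagePresent(ref) {
    		fmt.Println("already loaded:", ref)
    		return
    	}
    	if err := loadTarball(tarball); err != nil {
    		fmt.Println("load failed:", err)
    		return
    	}
    	fmt.Println("transferred and loaded:", ref)
    }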
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
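The kubeadm.yaml written above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). A small sketch of walking such a stream document by document, assuming the gopkg.in/yaml.v3 package is available; the embedded sample is trimmed to the kind/apiVersion headers and is not the full config from this run:

    package main

    import (
    	"fmt"
    	"io"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // A trimmed, illustrative stand-in for the multi-document config shown above.
    const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: v1.31.2
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    `

    func main() {
    	// Decode the stream one document at a time and report what each one is.
    	dec := yaml.NewDecoder(strings.NewReader(cfg))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			fmt.Println("decode error:", err)
    			return
    		}
    		fmt.Printf("found %v (%v)\n", doc["kind"], doc["apiVersion"])
    	}
    }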
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
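The openssl x509 -checkend 86400 calls above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be expressed with Go's standard crypto/x509, as in this sketch; the certificate path is one of those probed in the log and is used here only as an example:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend` from the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }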
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
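The healthz progression above is typical for an apiserver restart: connection refused while the process comes up, 403 for the anonymous probe, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap priority classes) finish, and finally 200. A self-contained sketch of that polling loop using only the standard library; the URL and timeout are illustrative, and certificate verification is skipped to keep the example short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
    // the deadline passes. 403 (anonymous forbidden) and 500 (post-start hooks still
    // running) both mean the process is up but not yet healthy, as in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver cert is signed by the cluster CA; skipping verification
    		// keeps the sketch self-contained. A real caller would load the CA cert.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.169:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }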
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
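	Each polling cycle above issues one crictl query per expected control-plane container and treats an empty ID list as "no container found", then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. A minimal Go sketch of that loop (a hypothetical stand-alone helper, not minikube's ssh_runner; assumes sudo and crictl are available on the node, with the container names copied from the cycle above):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Container names mirror the ones minikube looks for in each cycle.
	        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	        for _, name := range names {
	            // Equivalent to the "sudo crictl ps -a --quiet --name=<name>" calls in the log;
	            // empty output means no container with that name exists in any state.
	            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	            if err != nil || strings.TrimSpace(string(out)) == "" {
	                fmt.Printf("no container found matching %q\n", name)
	                continue
	            }
	            fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(out)))
	        }
	    }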
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
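	The interleaved pod_ready lines come from the other three profiles (process IDs 132693, 132605, 133282) polling their metrics-server pods until the Ready condition turns True. A minimal Go sketch of an equivalent one-shot readiness check (a hypothetical helper, not part of the test suite; assumes kubectl is on PATH and targets the relevant cluster, and the pod name is copied from the log so it would differ on another run):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Read the Ready condition of one metrics-server pod, analogous to pod_ready.go.
	        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
	            "metrics-server-6867b74b74-mhxtf",
	            "-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	        if err != nil {
	            fmt.Println("kubectl failed:", err)
	            return
	        }
	        // Prints "False" while the pod is unready, matching the log lines above.
	        fmt.Println("Ready:", strings.TrimSpace(string(out)))
	    }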
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
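(Editor's note) The interleaved pod_ready lines come from the other minikube processes in this parallel run (klog PIDs 132693, 132605, 133282), each waiting on its profile's metrics-server pod. Roughly, that check reads the pod's Ready condition; for example (pod name from the log, kubectl invocation assumed for illustration):

kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
  get pod metrics-server-6867b74b74-mhxtf \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False", as logged above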
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
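(Editor's note) The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. Here none of the files exist, so every grep fails and each rm is a no-op before kubeadm init regenerates them. The pattern as a sketch (the loop form is an assumption):

endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done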
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
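(Editor's note) The kubelet-check and api-check phases above are plain HTTP health probes: kubeadm polls the kubelet healthz endpoint named in the log, then the API server, until both answer. Approximately (the kubelet URL is from the log; the API server host and path here are assumptions for illustration):

curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
curl -skf https://control-plane.minikube.internal:8443/livez && echo "apiserver healthy"   # path assumed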
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
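(Editor's note) The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. Its exact contents are not in the log; a typical bridge-plus-portmap conflist looks roughly like the following (illustrative only, including the subnet):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF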
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
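(Context for the elevateKubeSystemPrivileges step above: the repeated "kubectl get sa default" runs poll roughly every 500ms until the default ServiceAccount exists, and the clusterrolebinding call grants kube-system:default cluster-admin. The following is a minimal, simplified sketch of that retry pattern using only os/exec; the kubectl and kubeconfig paths are taken from the log, it runs the commands locally and sequentially rather than over SSH, and it is not minikube's actual implementation.)

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    const (
        kubectl    = "/var/lib/minikube/binaries/v1.31.2/kubectl" // path seen in the log; adjust for your node
        kubeconfig = "/var/lib/minikube/kubeconfig"
    )

    func main() {
        // Poll until the "default" ServiceAccount exists; it is created asynchronously
        // by kube-controller-manager once the control plane is up.
        deadline := time.Now().Add(2 * time.Minute)
        for {
            err := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig).Run()
            if err == nil {
                break
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for default ServiceAccount:", err)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }

        // Grant kube-system:default cluster-admin, mirroring the
        // "create clusterrolebinding minikube-rbac" call in the log.
        out, err := exec.Command(kubectl,
            "create", "clusterrolebinding", "minikube-rbac",
            "--clusterrole=cluster-admin",
            "--serviceaccount=kube-system:default",
            "--kubeconfig", kubeconfig).CombinedOutput()
        fmt.Println(string(out), err)
    }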
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
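(The pod_ready lines above repeatedly read each system pod and inspect its Ready condition until it reports True, or until the per-pod timeout fires as it does for the metrics-server pods elsewhere in this log. Below is a compact client-go sketch of that polling pattern; the kubeconfig path and pod name are taken from the log for illustration only, and this is not minikube's actual pod_ready.go code.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig") // path from the log; adjust as needed
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll one kube-system pod until Ready or a 6m deadline, mirroring the
        // "waiting up to 6m0s for pod ... to be Ready" messages above.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-m4qgb", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for pod to be Ready")
                return
            case <-time.After(2 * time.Second):
            }
        }
    }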
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
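(The healthz check above is a plain HTTPS GET against the API server's /healthz endpoint, retried until it returns 200 with body "ok". A minimal standard-library sketch of that probe follows; the address is the one from the log, and certificate verification is skipped only to keep the example short, whereas minikube's own check trusts the cluster CA.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // API server endpoint taken from the log above; substitute your own.
        url := "https://192.168.72.76:8443/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify keeps the sketch short; a real check should
            // verify the server certificate against the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        for i := 0; i < 30; i++ {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("API server never reported healthy")
    }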
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
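(The NodePressure step reads the node object and reports its ephemeral-storage and CPU capacity, the 17734596Ki and 2 shown above. A short client-go sketch of reading those fields and the node's pressure conditions; the kubeconfig path and node name are assumptions copied from the log, not part of minikube's code.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20062-79135/kubeconfig") // assumed path from the log
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node, err := client.CoreV1().Nodes().Get(context.Background(), "embed-certs-274758", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // Capacity values corresponding to the "node storage ephemeral capacity"
        // and "node cpu capacity" lines in the log.
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        fmt.Println("ephemeral-storage:", storage.String())
        fmt.Println("cpu:", cpu.String())

        // Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should be False on a healthy node.
        for _, c := range node.Status.Conditions {
            if c.Type != corev1.NodeReady {
                fmt.Printf("%s=%s\n", c.Type, c.Status)
            }
        }
    }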
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
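(The config check above greps each kubeadm-generated kubeconfig under /etc/kubernetes for the expected control-plane endpoint, https://control-plane.minikube.internal:8444 here, and removes any file that does not reference it so the following kubeadm init regenerates them. A simplified local sketch of that cleanup logic; the file paths and endpoint are the ones shown in the log, and minikube actually runs the equivalent grep/rm over SSH rather than this code.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        configs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }

        for _, path := range configs {
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or wrong API server endpoint: remove it so that
                // "kubeadm init" writes a fresh one. Ignoring the error from
                // os.Remove on a missing file matches "rm -f".
                _ = os.Remove(path)
                fmt.Println("removed stale config:", path)
                continue
            }
            fmt.Println("keeping config:", path)
        }
    }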
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
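(Log gathering above asks the CRI runtime for the container ID of each control-plane component with "crictl ps -a --quiet --name=<component>" and then dumps its last 400 lines with "crictl logs --tail 400 <id>". A stand-alone sketch of the same two commands via os/exec, using only flags that appear in the log; it assumes it runs on the node itself with sufficient privileges.)

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all CRI containers whose name matches the filter,
    // exactly as "sudo crictl ps -a --quiet --name=<name>" does in the log above.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"}
        for _, component := range components {
            ids, err := containerIDs(component)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no %s container found\n", component)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, matching "crictl logs --tail 400 <id>".
                logs, _ := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s (%s)\n%s\n", component, id, logs)
            }
        }
    }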
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
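	(Reader's sketch, not minikube output: the addon manifests above are applied with the cluster's bundled kubectl under /var/lib/minikube/binaries, and the host kubeconfig was already updated earlier in this run. Assuming kubectl and minikube are on PATH, the enabled addons reported on the line above can be cross-checked from the host roughly like this; the profile name is taken from the log, the commands are standard CLI calls.)
	# list addons for this profile and confirm the metrics-server and default storageclass objects exist
	minikube -p default-k8s-diff-port-901295 addons list
	kubectl -n kube-system get deploy metrics-server
	kubectl get storageclass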
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
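	(Reader's sketch, not minikube output: both "Done!" lines above report that kubectl is now pointed at the freshly started cluster's context. A minimal way to confirm that from the host, assuming the context name matches the profile name as in the log, is with plain kubectl; nothing below is part of the captured run.)
	# confirm the active context and basic cluster health after "Done!"
	kubectl config current-context
	kubectl get nodes -o wide
	kubectl -n kube-system get pods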
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.388554414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793947388528096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c2f114e-1a8f-437f-bf84-fa3dc7ebabf6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.389163342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ece4e7e2-d22b-4c66-8233-2c5a6eaf5058 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.389228196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ece4e7e2-d22b-4c66-8233-2c5a6eaf5058 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.389261414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ece4e7e2-d22b-4c66-8233-2c5a6eaf5058 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.417863362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6f11e32-1e6b-4a91-869c-81fda66d215a name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.417959052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6f11e32-1e6b-4a91-869c-81fda66d215a name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.419028998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1a44223-0149-4094-86fe-8697e24aa236 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.419471254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793947419402487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1a44223-0149-4094-86fe-8697e24aa236 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.419931570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7524a8e-8629-4571-bfd0-890b164c0a9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.419998896Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7524a8e-8629-4571-bfd0-890b164c0a9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.420044823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7524a8e-8629-4571-bfd0-890b164c0a9b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.449996591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d87886e-5b75-4e3d-aeac-3813b5d30ad8 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.450075036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d87886e-5b75-4e3d-aeac-3813b5d30ad8 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.451096838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ccc1ca7-e45f-42d4-88bc-6b55d8c9eb04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.451675493Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793947451654139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ccc1ca7-e45f-42d4-88bc-6b55d8c9eb04 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.452116972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1774b910-f267-42b9-ba9a-484a7d23fea0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.452166945Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1774b910-f267-42b9-ba9a-484a7d23fea0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.452203346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1774b910-f267-42b9-ba9a-484a7d23fea0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.481801151Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32ea2169-b7ab-44d9-83b1-11883b8e0855 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.481876539Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32ea2169-b7ab-44d9-83b1-11883b8e0855 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.482937570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=652b9478-0eb9-4d09-ba49-8e1f7631be03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.483316456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733793947483294124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=652b9478-0eb9-4d09-ba49-8e1f7631be03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.483747228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=932d43dd-efba-4b81-9b3a-d48ac3306a67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.483807382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=932d43dd-efba-4b81-9b3a-d48ac3306a67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:25:47 old-k8s-version-094470 crio[632]: time="2024-12-10 01:25:47.483838222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=932d43dd-efba-4b81-9b3a-d48ac3306a67 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 01:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058441] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.955123] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.577947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.210341] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.056035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052496] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.200301] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.121921] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.235690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +5.849695] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.064134] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.756376] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +13.680417] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 01:12] systemd-fstab-generator[5121]: Ignoring "noauto" option for root device
	[Dec10 01:14] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.065463] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:25:47 up 17 min,  0 users,  load average: 0.16, 0.09, 0.08
	Linux old-k8s-version-094470 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: goroutine 147 [runnable]:
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00089f340)
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: goroutine 148 [select]:
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000259ef0, 0xc000200701, 0xc000af9100, 0xc00035df80, 0xc000904640, 0xc000904600)
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000200780, 0x0, 0x0)
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00089f340)
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 10 01:25:42 old-k8s-version-094470 kubelet[6584]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 10 01:25:42 old-k8s-version-094470 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 01:25:42 old-k8s-version-094470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 01:25:43 old-k8s-version-094470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 10 01:25:43 old-k8s-version-094470 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 01:25:43 old-k8s-version-094470 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 01:25:43 old-k8s-version-094470 kubelet[6593]: I1210 01:25:43.091518    6593 server.go:416] Version: v1.20.0
	Dec 10 01:25:43 old-k8s-version-094470 kubelet[6593]: I1210 01:25:43.091758    6593 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 01:25:43 old-k8s-version-094470 kubelet[6593]: I1210 01:25:43.093544    6593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 01:25:43 old-k8s-version-094470 kubelet[6593]: W1210 01:25:43.094251    6593 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 10 01:25:43 old-k8s-version-094470 kubelet[6593]: I1210 01:25:43.094515    6593 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (243.761016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-094470" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.17s)
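The kubelet excerpt above ("Cannot detect current cgroup on cgroup v2", "restart counter is at 114") points at the v1.20.0 kubelet crash-looping on this guest, which is why kubeadm's wait-control-plane phase never sees 10248/healthz come up. A minimal troubleshooting sketch, reusing only the commands the log itself suggests; wrapping them in `minikube ssh` and the profile name old-k8s-version-094470 are assumptions taken from this run, not part of the harness:

	# assumption: reach the node via `minikube ssh`; the harness above runs these via ssh_runner directly
	minikube -p old-k8s-version-094470 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-094470 ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# list whatever control-plane containers CRI-O started (command copied from the kubeadm hint above)
	minikube -p old-k8s-version-094470 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry with the cgroup driver the minikube suggestion above names
	minikube start -p old-k8s-version-094470 --extra-config=kubelet.cgroup-driver=systemd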

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (378.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274758 -n embed-certs-274758
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:28:49.699895781 +0000 UTC m=+6322.779533928
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-274758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-274758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.511µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-274758 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
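The check above waits up to 9m0s for a pod labeled k8s-app=kubernetes-dashboard and then verifies that the dashboard-metrics-scraper deployment carries the overridden image registry.k8s.io/echoserver:1.4 (set via --images=MetricsScraper=... in the Audit table below). A hedged sketch of reproducing those two checks manually against the embed-certs-274758 context; the commands only roughly mirror what the harness does and are not part of the report itself:

	# wait for the dashboard pod, roughly as the test does
	kubectl --context embed-certs-274758 -n kubernetes-dashboard wait \
	  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# confirm which image the scraper deployment actually references
	kubectl --context embed-certs-274758 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'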
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-274758 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-274758 logs -n 25: (1.138063322s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:28 UTC | 10 Dec 24 01:28 UTC |
	| start   | -p newest-cni-967831 --memory=2200 --alsologtostderr   | newest-cni-967831            | jenkins | v1.34.0 | 10 Dec 24 01:28 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:28 UTC | 10 Dec 24 01:28 UTC |
	| start   | -p auto-796478 --memory=3072                           | auto-796478                  | jenkins | v1.34.0 | 10 Dec 24 01:28 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:28:33
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:28:33.142248  140280 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:28:33.142497  140280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:28:33.142506  140280 out.go:358] Setting ErrFile to fd 2...
	I1210 01:28:33.142511  140280 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:28:33.142738  140280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:28:33.143304  140280 out.go:352] Setting JSON to false
	I1210 01:28:33.144283  140280 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11464,"bootTime":1733782649,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:28:33.144396  140280 start.go:139] virtualization: kvm guest
	I1210 01:28:33.146438  140280 out.go:177] * [auto-796478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:28:33.147647  140280 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:28:33.147669  140280 notify.go:220] Checking for updates...
	I1210 01:28:33.149916  140280 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:28:33.151210  140280 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:28:33.152492  140280 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:28:33.153613  140280 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:28:33.154792  140280 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:28:33.156365  140280 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:28:33.156537  140280 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:28:33.156689  140280 config.go:182] Loaded profile config "newest-cni-967831": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:28:33.156772  140280 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:28:33.193193  140280 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 01:28:33.194345  140280 start.go:297] selected driver: kvm2
	I1210 01:28:33.194358  140280 start.go:901] validating driver "kvm2" against <nil>
	I1210 01:28:33.194373  140280 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:28:33.195252  140280 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:28:33.195344  140280 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:28:33.210930  140280 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:28:33.210972  140280 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 01:28:33.211213  140280 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:28:33.211248  140280 cni.go:84] Creating CNI manager for ""
	I1210 01:28:33.211294  140280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:28:33.211302  140280 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1210 01:28:33.211347  140280 start.go:340] cluster config:
	{Name:auto-796478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-796478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:28:33.211437  140280 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:28:33.212960  140280 out.go:177] * Starting "auto-796478" primary control-plane node in "auto-796478" cluster
	I1210 01:28:30.603611  140031 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1210 01:28:30.603791  140031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:28:30.603833  140031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:28:30.623394  140031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I1210 01:28:30.623964  140031 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:28:30.624620  140031 main.go:141] libmachine: Using API Version  1
	I1210 01:28:30.624653  140031 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:28:30.624982  140031 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:28:30.625172  140031 main.go:141] libmachine: (newest-cni-967831) Calling .GetMachineName
	I1210 01:28:30.625346  140031 main.go:141] libmachine: (newest-cni-967831) Calling .DriverName
	I1210 01:28:30.625520  140031 start.go:159] libmachine.API.Create for "newest-cni-967831" (driver="kvm2")
	I1210 01:28:30.625559  140031 client.go:168] LocalClient.Create starting
	I1210 01:28:30.625590  140031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 01:28:30.625624  140031 main.go:141] libmachine: Decoding PEM data...
	I1210 01:28:30.625641  140031 main.go:141] libmachine: Parsing certificate...
	I1210 01:28:30.625711  140031 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 01:28:30.625730  140031 main.go:141] libmachine: Decoding PEM data...
	I1210 01:28:30.625741  140031 main.go:141] libmachine: Parsing certificate...
	I1210 01:28:30.625754  140031 main.go:141] libmachine: Running pre-create checks...
	I1210 01:28:30.625768  140031 main.go:141] libmachine: (newest-cni-967831) Calling .PreCreateCheck
	I1210 01:28:30.626093  140031 main.go:141] libmachine: (newest-cni-967831) Calling .GetConfigRaw
	I1210 01:28:30.626549  140031 main.go:141] libmachine: Creating machine...
	I1210 01:28:30.626603  140031 main.go:141] libmachine: (newest-cni-967831) Calling .Create
	I1210 01:28:30.626789  140031 main.go:141] libmachine: (newest-cni-967831) Creating KVM machine...
	I1210 01:28:30.628392  140031 main.go:141] libmachine: (newest-cni-967831) DBG | found existing default KVM network
	I1210 01:28:30.630242  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:30.629998  140053 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:44:7d} reservation:<nil>}
	I1210 01:28:30.631188  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:30.631097  140053 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c2:16:a5} reservation:<nil>}
	I1210 01:28:30.632517  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:30.632420  140053 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003927e0}
	I1210 01:28:30.632601  140031 main.go:141] libmachine: (newest-cni-967831) DBG | created network xml: 
	I1210 01:28:30.632621  140031 main.go:141] libmachine: (newest-cni-967831) DBG | <network>
	I1210 01:28:30.632646  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   <name>mk-newest-cni-967831</name>
	I1210 01:28:30.632671  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   <dns enable='no'/>
	I1210 01:28:30.632680  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   
	I1210 01:28:30.632690  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1210 01:28:30.632698  140031 main.go:141] libmachine: (newest-cni-967831) DBG |     <dhcp>
	I1210 01:28:30.632707  140031 main.go:141] libmachine: (newest-cni-967831) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1210 01:28:30.632719  140031 main.go:141] libmachine: (newest-cni-967831) DBG |     </dhcp>
	I1210 01:28:30.632728  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   </ip>
	I1210 01:28:30.632741  140031 main.go:141] libmachine: (newest-cni-967831) DBG |   
	I1210 01:28:30.632747  140031 main.go:141] libmachine: (newest-cni-967831) DBG | </network>
	I1210 01:28:30.632761  140031 main.go:141] libmachine: (newest-cni-967831) DBG | 
	I1210 01:28:30.639038  140031 main.go:141] libmachine: (newest-cni-967831) DBG | trying to create private KVM network mk-newest-cni-967831 192.168.61.0/24...
	I1210 01:28:30.726107  140031 main.go:141] libmachine: (newest-cni-967831) DBG | private KVM network mk-newest-cni-967831 192.168.61.0/24 created
	I1210 01:28:30.726137  140031 main.go:141] libmachine: (newest-cni-967831) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831 ...
	I1210 01:28:30.726150  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:30.726084  140053 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:28:30.726161  140031 main.go:141] libmachine: (newest-cni-967831) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 01:28:30.726285  140031 main.go:141] libmachine: (newest-cni-967831) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 01:28:31.071362  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:31.071181  140053 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831/id_rsa...
	I1210 01:28:31.276835  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:31.276696  140053 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831/newest-cni-967831.rawdisk...
	I1210 01:28:31.276876  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Writing magic tar header
	I1210 01:28:31.276892  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Writing SSH key tar header
	I1210 01:28:31.276900  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:31.276843  140053 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831 ...
	I1210 01:28:31.277009  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831
	I1210 01:28:31.277046  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831 (perms=drwx------)
	I1210 01:28:31.277057  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 01:28:31.277069  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:28:31.277075  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 01:28:31.277083  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 01:28:31.277093  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home/jenkins
	I1210 01:28:31.277108  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Checking permissions on dir: /home
	I1210 01:28:31.277116  140031 main.go:141] libmachine: (newest-cni-967831) DBG | Skipping /home - not owner
	I1210 01:28:31.277145  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 01:28:31.277173  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 01:28:31.277201  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 01:28:31.277211  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 01:28:31.277223  140031 main.go:141] libmachine: (newest-cni-967831) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 01:28:31.277230  140031 main.go:141] libmachine: (newest-cni-967831) Creating domain...
	I1210 01:28:31.278667  140031 main.go:141] libmachine: (newest-cni-967831) define libvirt domain using xml: 
	I1210 01:28:31.278707  140031 main.go:141] libmachine: (newest-cni-967831) <domain type='kvm'>
	I1210 01:28:31.278732  140031 main.go:141] libmachine: (newest-cni-967831)   <name>newest-cni-967831</name>
	I1210 01:28:31.278746  140031 main.go:141] libmachine: (newest-cni-967831)   <memory unit='MiB'>2200</memory>
	I1210 01:28:31.278760  140031 main.go:141] libmachine: (newest-cni-967831)   <vcpu>2</vcpu>
	I1210 01:28:31.278766  140031 main.go:141] libmachine: (newest-cni-967831)   <features>
	I1210 01:28:31.278811  140031 main.go:141] libmachine: (newest-cni-967831)     <acpi/>
	I1210 01:28:31.278829  140031 main.go:141] libmachine: (newest-cni-967831)     <apic/>
	I1210 01:28:31.278841  140031 main.go:141] libmachine: (newest-cni-967831)     <pae/>
	I1210 01:28:31.278851  140031 main.go:141] libmachine: (newest-cni-967831)     
	I1210 01:28:31.278860  140031 main.go:141] libmachine: (newest-cni-967831)   </features>
	I1210 01:28:31.278871  140031 main.go:141] libmachine: (newest-cni-967831)   <cpu mode='host-passthrough'>
	I1210 01:28:31.278879  140031 main.go:141] libmachine: (newest-cni-967831)   
	I1210 01:28:31.278888  140031 main.go:141] libmachine: (newest-cni-967831)   </cpu>
	I1210 01:28:31.278906  140031 main.go:141] libmachine: (newest-cni-967831)   <os>
	I1210 01:28:31.278922  140031 main.go:141] libmachine: (newest-cni-967831)     <type>hvm</type>
	I1210 01:28:31.278934  140031 main.go:141] libmachine: (newest-cni-967831)     <boot dev='cdrom'/>
	I1210 01:28:31.278944  140031 main.go:141] libmachine: (newest-cni-967831)     <boot dev='hd'/>
	I1210 01:28:31.278953  140031 main.go:141] libmachine: (newest-cni-967831)     <bootmenu enable='no'/>
	I1210 01:28:31.278962  140031 main.go:141] libmachine: (newest-cni-967831)   </os>
	I1210 01:28:31.278970  140031 main.go:141] libmachine: (newest-cni-967831)   <devices>
	I1210 01:28:31.278981  140031 main.go:141] libmachine: (newest-cni-967831)     <disk type='file' device='cdrom'>
	I1210 01:28:31.278998  140031 main.go:141] libmachine: (newest-cni-967831)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831/boot2docker.iso'/>
	I1210 01:28:31.279010  140031 main.go:141] libmachine: (newest-cni-967831)       <target dev='hdc' bus='scsi'/>
	I1210 01:28:31.279020  140031 main.go:141] libmachine: (newest-cni-967831)       <readonly/>
	I1210 01:28:31.279028  140031 main.go:141] libmachine: (newest-cni-967831)     </disk>
	I1210 01:28:31.279038  140031 main.go:141] libmachine: (newest-cni-967831)     <disk type='file' device='disk'>
	I1210 01:28:31.279049  140031 main.go:141] libmachine: (newest-cni-967831)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 01:28:31.279076  140031 main.go:141] libmachine: (newest-cni-967831)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/newest-cni-967831/newest-cni-967831.rawdisk'/>
	I1210 01:28:31.279088  140031 main.go:141] libmachine: (newest-cni-967831)       <target dev='hda' bus='virtio'/>
	I1210 01:28:31.279097  140031 main.go:141] libmachine: (newest-cni-967831)     </disk>
	I1210 01:28:31.279108  140031 main.go:141] libmachine: (newest-cni-967831)     <interface type='network'>
	I1210 01:28:31.279117  140031 main.go:141] libmachine: (newest-cni-967831)       <source network='mk-newest-cni-967831'/>
	I1210 01:28:31.279132  140031 main.go:141] libmachine: (newest-cni-967831)       <model type='virtio'/>
	I1210 01:28:31.279143  140031 main.go:141] libmachine: (newest-cni-967831)     </interface>
	I1210 01:28:31.279150  140031 main.go:141] libmachine: (newest-cni-967831)     <interface type='network'>
	I1210 01:28:31.279162  140031 main.go:141] libmachine: (newest-cni-967831)       <source network='default'/>
	I1210 01:28:31.279176  140031 main.go:141] libmachine: (newest-cni-967831)       <model type='virtio'/>
	I1210 01:28:31.279188  140031 main.go:141] libmachine: (newest-cni-967831)     </interface>
	I1210 01:28:31.279198  140031 main.go:141] libmachine: (newest-cni-967831)     <serial type='pty'>
	I1210 01:28:31.279206  140031 main.go:141] libmachine: (newest-cni-967831)       <target port='0'/>
	I1210 01:28:31.279215  140031 main.go:141] libmachine: (newest-cni-967831)     </serial>
	I1210 01:28:31.279232  140031 main.go:141] libmachine: (newest-cni-967831)     <console type='pty'>
	I1210 01:28:31.279243  140031 main.go:141] libmachine: (newest-cni-967831)       <target type='serial' port='0'/>
	I1210 01:28:31.279250  140031 main.go:141] libmachine: (newest-cni-967831)     </console>
	I1210 01:28:31.279270  140031 main.go:141] libmachine: (newest-cni-967831)     <rng model='virtio'>
	I1210 01:28:31.279290  140031 main.go:141] libmachine: (newest-cni-967831)       <backend model='random'>/dev/random</backend>
	I1210 01:28:31.279302  140031 main.go:141] libmachine: (newest-cni-967831)     </rng>
	I1210 01:28:31.279312  140031 main.go:141] libmachine: (newest-cni-967831)     
	I1210 01:28:31.279343  140031 main.go:141] libmachine: (newest-cni-967831)     
	I1210 01:28:31.279356  140031 main.go:141] libmachine: (newest-cni-967831)   </devices>
	I1210 01:28:31.279368  140031 main.go:141] libmachine: (newest-cni-967831) </domain>
	I1210 01:28:31.279379  140031 main.go:141] libmachine: (newest-cni-967831) 
	I1210 01:28:31.284099  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:07:f3:d2 in network default
	I1210 01:28:31.284853  140031 main.go:141] libmachine: (newest-cni-967831) Ensuring networks are active...
	I1210 01:28:31.284897  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:31.285777  140031 main.go:141] libmachine: (newest-cni-967831) Ensuring network default is active
	I1210 01:28:31.286241  140031 main.go:141] libmachine: (newest-cni-967831) Ensuring network mk-newest-cni-967831 is active
	I1210 01:28:31.286899  140031 main.go:141] libmachine: (newest-cni-967831) Getting domain xml...
	I1210 01:28:31.287824  140031 main.go:141] libmachine: (newest-cni-967831) Creating domain...
	I1210 01:28:32.965654  140031 main.go:141] libmachine: (newest-cni-967831) Waiting to get IP...
	I1210 01:28:32.966480  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:32.966974  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:32.967004  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:32.966945  140053 retry.go:31] will retry after 251.998027ms: waiting for machine to come up
	I1210 01:28:33.220341  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:33.220790  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:33.220815  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:33.220741  140053 retry.go:31] will retry after 306.979295ms: waiting for machine to come up
	I1210 01:28:33.529350  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:33.529857  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:33.529887  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:33.529817  140053 retry.go:31] will retry after 460.325111ms: waiting for machine to come up
	I1210 01:28:33.991428  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:33.991901  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:33.991931  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:33.991853  140053 retry.go:31] will retry after 505.200103ms: waiting for machine to come up
	I1210 01:28:34.498194  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:34.498615  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:34.498638  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:34.498555  140053 retry.go:31] will retry after 660.563534ms: waiting for machine to come up
	I1210 01:28:35.160355  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:35.160838  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:35.160869  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:35.160791  140053 retry.go:31] will retry after 708.043529ms: waiting for machine to come up
	I1210 01:28:33.214037  140280 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:28:33.214083  140280 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:28:33.214093  140280 cache.go:56] Caching tarball of preloaded images
	I1210 01:28:33.214168  140280 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:28:33.214183  140280 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:28:33.214286  140280 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/auto-796478/config.json ...
	I1210 01:28:33.214306  140280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/auto-796478/config.json: {Name:mkfe86fa32fb11cb2ae23c31f188dcf0924e767f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:28:33.214466  140280 start.go:360] acquireMachinesLock for auto-796478: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:28:35.870312  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:35.870818  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:35.870851  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:35.870758  140053 retry.go:31] will retry after 1.03419145s: waiting for machine to come up
	I1210 01:28:36.906033  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:36.906519  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:36.906551  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:36.906477  140053 retry.go:31] will retry after 1.336459793s: waiting for machine to come up
	I1210 01:28:38.244661  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:38.245166  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:38.245196  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:38.245083  140053 retry.go:31] will retry after 1.172744041s: waiting for machine to come up
	I1210 01:28:39.419220  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:39.419592  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:39.419622  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:39.419544  140053 retry.go:31] will retry after 1.521960679s: waiting for machine to come up
	I1210 01:28:40.943258  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:40.943680  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:40.943704  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:40.943643  140053 retry.go:31] will retry after 2.517670612s: waiting for machine to come up
	I1210 01:28:43.462455  140031 main.go:141] libmachine: (newest-cni-967831) DBG | domain newest-cni-967831 has defined MAC address 52:54:00:d8:00:d3 in network mk-newest-cni-967831
	I1210 01:28:43.462934  140031 main.go:141] libmachine: (newest-cni-967831) DBG | unable to find current IP address of domain newest-cni-967831 in network mk-newest-cni-967831
	I1210 01:28:43.462965  140031 main.go:141] libmachine: (newest-cni-967831) DBG | I1210 01:28:43.462891  140053 retry.go:31] will retry after 3.200352661s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.266251726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794130266231869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef188da9-14d7-4e86-b7eb-65477cd0dc50 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.266771786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f405fae-ecb9-4810-b41c-6acc6ea0ede7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.266826725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f405fae-ecb9-4810-b41c-6acc6ea0ede7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.267076608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f405fae-ecb9-4810-b41c-6acc6ea0ede7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.300153394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbb225b9-f69f-4910-847c-1a1aa3bddbab name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.300243040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbb225b9-f69f-4910-847c-1a1aa3bddbab name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.301064281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bc95c42-2527-4a31-9abf-a5da8a371c87 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.301426121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794130301402887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bc95c42-2527-4a31-9abf-a5da8a371c87 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.301808311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f046c787-c835-4076-b979-c0d85228919a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.301854505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f046c787-c835-4076-b979-c0d85228919a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.302263765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f046c787-c835-4076-b979-c0d85228919a name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.334733479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3125da8-fdc0-4b27-bb5d-237fac45d98c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.334800478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3125da8-fdc0-4b27-bb5d-237fac45d98c name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.335726526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa6efd0f-1e5d-48a2-8c38-93cfed273228 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.336147433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794130336127803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa6efd0f-1e5d-48a2-8c38-93cfed273228 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.336756630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81873dbc-10f2-43a0-a75b-c01b5a5c358c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.336811366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81873dbc-10f2-43a0-a75b-c01b5a5c358c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.337108483Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81873dbc-10f2-43a0-a75b-c01b5a5c358c name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.368438673Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ba23252-eae4-4669-8f33-f763afa450ec name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.368515026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ba23252-eae4-4669-8f33-f763afa450ec name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.369542584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c7f2d5c-eeae-41b0-b2d1-aae4812bfb5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.369952428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794130369886443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c7f2d5c-eeae-41b0-b2d1-aae4812bfb5d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.370425568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c0bb3ee-5a44-4b4b-a676-e79ee70db647 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.370485134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c0bb3ee-5a44-4b4b-a676-e79ee70db647 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:50 embed-certs-274758 crio[718]: time="2024-12-10 01:28:50.370670471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f,PodSandboxId:41e8d06cd50568f5d4a172e0c41ef4292927a09467a284e2feb14a073a350615,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793200028671668,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71e4d38f-b0fe-43cf-a844-ba787287fda6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170,PodSandboxId:df58a6a4b4b98b3d68c748f8692fd21feedd8a998cc8c452d8269fb561d49411,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199649507753,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bgjgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277d23ef-ff20-414d-beb6-c6982712a423,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915,PodSandboxId:e7a4c1081cefac10a4214b4cc646e1891af84fcd7bf1d08728a6c8f72ef013b3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793199583563499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m4qgb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
1253d1b-c010-41e2-9286-e9930025e9ff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f,PodSandboxId:41e5ee0c3296f07af17418441ca07ee113fcc0cf3afd4441e81baf97c2edf92c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733793198861319737,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v28mz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cd47cc1-a085-4e77-850d-dde0c8ed6054,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db,PodSandboxId:f8b897e6d6a537fa4d8ae1ed8b9dd80c864e68c576e328ea97aebb219a91a6cb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793188118660133,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7,PodSandboxId:a4e108c8f05dcb6039dc92adc6d26097a6f97ed0af7dcfc6ddb40d919f92adbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733793188077836579,Labels:map[string]st
ring{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d1ab12683a6c965f20a7467f588bc94,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284,PodSandboxId:1c0a6de6f1d87ea4dbb532ae95c689a60779d2c96adb0636b1e6e48bae719680,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733793188091155407,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e4ade909c7172eb501f725fba84f76e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932,PodSandboxId:767ea99b563004be5f52740ce343bfaaff2fe3467f32172416d92a5fb9212758,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733793188072241167,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b6b99b806e5357d10ab1acbd63fc7fa,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24,PodSandboxId:d49f09c506027d724ab68bc7880ba05cf66c71379a1bbe3fc06b669311c4fc08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792906352704503,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-274758,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f70de46867691833f10ec303c300c8f,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c0bb3ee-5a44-4b4b-a676-e79ee70db647 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	539ca3cb672dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   41e8d06cd5056       storage-provisioner
	55a7c60e436fa       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   df58a6a4b4b98       coredns-7c65d6cfc9-bgjgh
	2b3ff20847120       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   e7a4c1081cefa       coredns-7c65d6cfc9-m4qgb
	75d5ee8060a1e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   41e5ee0c3296f       kube-proxy-v28mz
	d9ca46cabc94b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   f8b897e6d6a53       kube-apiserver-embed-certs-274758
	bebe7b8c93db1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   1c0a6de6f1d87       kube-controller-manager-embed-certs-274758
	8658835ca140b       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   a4e108c8f05dc       kube-scheduler-embed-certs-274758
	29eefdbc8574b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   767ea99b56300       etcd-embed-certs-274758
	c9e90d02b1492       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   20 minutes ago      Exited              kube-apiserver            1                   d49f09c506027       kube-apiserver-embed-certs-274758
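	The container status table and the repeated /runtime.v1.RuntimeService/ListContainers entries in the CRI-O debug log above are two renderings of the same CRI data. For orientation only (this is not part of the minikube test suite), a minimal Go sketch of issuing that same RPC against the CRI-O socket follows; the socket path matches CRI-O's default and the cri-socket annotation shown in the node description below, and the k8s.io/cri-api package is the standard CRI client, both assumptions rather than anything the report itself runs.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket location on the node (unix:///var/run/crio/crio.sock).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter mirrors the logged requests: every container is returned, running or exited.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s  %-25s  %-18s  attempt=%d\n",
				c.Id, c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}

	Run on the node itself (for example via minikube ssh), a sketch like this prints one line per container, corresponding to the CONTAINER, NAME, STATE and ATTEMPT columns of the table above.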
	
	
	==> coredns [2b3ff20847120df12a65c14c8c95260d83dfd25ee36e9187815c6e7884387915] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [55a7c60e436fac7a6376974b0733a09608016535ca692ffbe443e635d82d1170] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-274758
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-274758
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=embed-certs-274758
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 01:13:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-274758
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:28:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:28:42 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:28:42 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:28:42 +0000   Tue, 10 Dec 2024 01:13:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:28:42 +0000   Tue, 10 Dec 2024 01:13:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.76
	  Hostname:    embed-certs-274758
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 56378d021cd14668a888b76f8753656d
	  System UUID:                56378d02-1cd1-4668-a888-b76f8753656d
	  Boot ID:                    c417dfc5-e023-447a-a35b-9f030b1e0e21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bgjgh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-m4qgb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-274758                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-274758             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-274758    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-v28mz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-274758             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-mcw2c               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-274758 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-274758 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-274758 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-274758 event: Registered Node embed-certs-274758 in Controller
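	The node description above is what kubectl renders from the Node object's status. As a rough sketch only (the kubeconfig path and context handling here are assumptions, not how the test harness reads this data), the same conditions and allocatable resources can be pulled with client-go:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the minikube profile under test writes its context here.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-274758", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Same data that appears under "Conditions" and "Allocatable" in the describe output.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
			"memory:", node.Status.Allocatable.Memory().String())
	}

	MemoryPressure, DiskPressure and PIDPressure reported False together with Ready True, as printed by such a check, is the healthy state the conditions table above shows for embed-certs-274758.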
	
	
	==> dmesg <==
	[  +0.052068] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037237] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.773561] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.944413] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.537439] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.387611] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.066423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074139] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.201898] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.124542] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.286624] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[  +3.900794] systemd-fstab-generator[803]: Ignoring "noauto" option for root device
	[  +2.037616] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +0.058273] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.498688] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.936754] kauditd_printk_skb: 85 callbacks suppressed
	[Dec10 01:13] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.063011] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.993630] systemd-fstab-generator[2950]: Ignoring "noauto" option for root device
	[  +0.103391] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.259436] systemd-fstab-generator[3060]: Ignoring "noauto" option for root device
	[  +0.110117] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.824923] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [29eefdbc8574b415ab0f94d95c3114d03615c96fef747dad1081739f33126932] <==
	{"level":"info","ts":"2024-12-10T01:13:08.578019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:08.578041Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 received MsgPreVoteResp from 6643fb104721b396 at term 1"}
	{"level":"info","ts":"2024-12-10T01:13:08.578053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 received MsgVoteResp from 6643fb104721b396 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6643fb104721b396 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.578075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6643fb104721b396 elected leader 6643fb104721b396 at term 2"}
	{"level":"info","ts":"2024-12-10T01:13:08.582138Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6643fb104721b396","local-member-attributes":"{Name:embed-certs-274758 ClientURLs:[https://192.168.72.76:2379]}","request-path":"/0/members/6643fb104721b396/attributes","cluster-id":"28d07f4d863f7a6f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T01:13:08.582335Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.582452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:08.582826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:13:08.584950Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:08.585067Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T01:13:08.586615Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:08.587124Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:08.591816Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:13:08.598038Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"28d07f4d863f7a6f","local-member-id":"6643fb104721b396","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598118Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598157Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T01:13:08.598502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.76:2379"}
	{"level":"info","ts":"2024-12-10T01:23:08.830937Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-12-10T01:23:08.841068Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"9.507395ms","hash":2511447460,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2400256,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-12-10T01:23:08.841178Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2511447460,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-12-10T01:28:08.837409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-12-10T01:28:08.841531Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":925,"took":"3.445344ms","hash":938382442,"current-db-size-bytes":2400256,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1675264,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-12-10T01:28:08.841612Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":938382442,"revision":925,"compact-revision":682}
	
	
	==> kernel <==
	 01:28:50 up 20 min,  0 users,  load average: 0.03, 0.15, 0.17
	Linux embed-certs-274758 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c9e90d02b1492860d29809638b72ff024a3ee9e206d3c17200ee51b1f3615c24] <==
	W1210 01:13:04.138863       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.198497       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.254855       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.273349       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.298206       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.314961       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.501510       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.511178       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.540833       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.558473       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.593591       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.731336       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.779541       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.783244       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.784581       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.796123       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.828869       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.846580       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.912557       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:04.986773       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.023462       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.048234       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.054689       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.151796       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:05.226365       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d9ca46cabc94b73b3a51caad200e0e9799e4a3537f99e32ab2b0cd417c0ab7db] <==
	I1210 01:24:11.594202       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:24:11.595350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:26:11.595441       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:26:11.595841       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 01:26:11.595731       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:26:11.596029       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 01:26:11.597625       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:26:11.597624       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:28:10.592584       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:10.592797       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 01:28:11.594195       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:11.594262       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 01:28:11.594330       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:11.594459       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:28:11.595419       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:28:11.595515       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bebe7b8c93db1940dbf7a305b59b0b9c675c5d3c03a30e852340357e4835a284] <==
	E1210 01:23:47.662887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:23:48.110490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:24:17.669312       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:24:18.118729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:24:22.238702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="151.769µs"
	I1210 01:24:33.242736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="88.057µs"
	E1210 01:24:47.675566       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:24:48.127561       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:25:17.682066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:18.135557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:25:47.688215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:48.144408       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:26:17.694485       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:18.152706       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:26:47.700561       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:48.161177       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:17.708039       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:18.170249       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:47.713752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:48.177574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:28:17.719460       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:28:18.186805       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:28:42.440967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-274758"
	E1210 01:28:47.727182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:28:48.196404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [75d5ee8060a1e08c1c4ba1136c0629a7997d6bd5a5df123954998284ae05253f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:13:19.173429       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:13:19.189652       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.76"]
	E1210 01:13:19.189736       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:13:19.264418       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:13:19.264472       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:13:19.264505       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:13:19.266809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:13:19.267102       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:13:19.267137       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:13:19.268632       1 config.go:199] "Starting service config controller"
	I1210 01:13:19.268669       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:13:19.268703       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:13:19.268723       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:13:19.269272       1 config.go:328] "Starting node config controller"
	I1210 01:13:19.269301       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:13:19.368801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:13:19.368861       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:13:19.369666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [8658835ca140bd6228a30fe8124de708f05d50d5def0b2056e923fa050514de7] <==
	W1210 01:13:10.597122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 01:13:10.597634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 01:13:10.597720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594686       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:10.597842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.594724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 01:13:10.597961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.597194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:10.598028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:10.597242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:10.598097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.467827       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 01:13:11.468001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.547837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 01:13:11.548043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.598149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 01:13:11.598215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.764402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:13:11.764507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.768152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:11.768258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:11.824273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:11.824361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1210 01:13:12.189487       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:27:34 embed-certs-274758 kubelet[2957]: E1210 01:27:34.225434    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:27:43 embed-certs-274758 kubelet[2957]: E1210 01:27:43.445411    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794063445187591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:43 embed-certs-274758 kubelet[2957]: E1210 01:27:43.445471    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794063445187591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:48 embed-certs-274758 kubelet[2957]: E1210 01:27:48.225270    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:27:53 embed-certs-274758 kubelet[2957]: E1210 01:27:53.446789    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794073446451919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:53 embed-certs-274758 kubelet[2957]: E1210 01:27:53.447127    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794073446451919,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:00 embed-certs-274758 kubelet[2957]: E1210 01:28:00.226138    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:28:03 embed-certs-274758 kubelet[2957]: E1210 01:28:03.449798    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794083449435345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:03 embed-certs-274758 kubelet[2957]: E1210 01:28:03.449823    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794083449435345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]: E1210 01:28:13.242003    2957 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]: E1210 01:28:13.452198    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794093451720465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:13 embed-certs-274758 kubelet[2957]: E1210 01:28:13.452235    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794093451720465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:15 embed-certs-274758 kubelet[2957]: E1210 01:28:15.226648    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:28:23 embed-certs-274758 kubelet[2957]: E1210 01:28:23.454102    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794103453661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:23 embed-certs-274758 kubelet[2957]: E1210 01:28:23.454396    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794103453661290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:28 embed-certs-274758 kubelet[2957]: E1210 01:28:28.226178    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:28:33 embed-certs-274758 kubelet[2957]: E1210 01:28:33.455871    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794113455579714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:33 embed-certs-274758 kubelet[2957]: E1210 01:28:33.456030    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794113455579714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:41 embed-certs-274758 kubelet[2957]: E1210 01:28:41.226015    2957 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mcw2c" podUID="a7b75933-124c-4577-b26a-ad1c5c128910"
	Dec 10 01:28:43 embed-certs-274758 kubelet[2957]: E1210 01:28:43.457774    2957 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794123457536152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:43 embed-certs-274758 kubelet[2957]: E1210 01:28:43.457806    2957 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794123457536152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [539ca3cb672dc5a83e86ae5c31705ee7c3e3d1d00d7f20c3e3d09db37492e97f] <==
	I1210 01:13:20.120449       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:13:20.131144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:13:20.131205       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:13:20.144731       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:13:20.145084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142!
	I1210 01:13:20.146637       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46948010-873c-4fb9-bdc6-c2b19cb378d9", APIVersion:"v1", ResourceVersion:"390", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142 became leader
	I1210 01:13:20.245322       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-274758_2172c1a7-4a8d-4542-b234-bcd085cfb142!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274758 -n embed-certs-274758
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-274758 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mcw2c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c: exit status 1 (60.504253ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mcw2c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-274758 describe pod metrics-server-6867b74b74-mcw2c: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (378.90s)
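The post-mortem above first lists pods that are not in the Running phase and then describes the named pod a moment later; the NotFound result for metrics-server-6867b74b74-mcw2c suggests the pod was replaced between the two commands. A minimal sketch of the same two-step check, assuming the embed-certs-274758 context is still reachable (the placeholder pod name must be filled in from the first command's output):

	# list pods in all namespaces that are not in the Running phase
	kubectl --context embed-certs-274758 get po -A --field-selector=status.phase!=Running
	# describe one pod from that list; NotFound here usually means it was recreated in the meantime
	kubectl --context embed-certs-274758 describe pod <pod-name> -n kube-system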

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (323.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-584179 -n no-preload-584179
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:28:27.463880698 +0000 UTC m=+6300.543518844
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-584179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-584179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.912µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-584179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
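The failed assertion compares the dashboard-metrics-scraper deployment's image against the override passed to `addons enable dashboard` (see the Audit table below, `--images=MetricsScraper=registry.k8s.io/echoserver:1.4`). A minimal manual version of that check, assuming the no-preload-584179 context is reachable; the jsonpath expression is illustrative, not the test's own code:

	# the test waits up to 9m for these pods before giving up
	kubectl --context no-preload-584179 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# print the scraper deployment's container image; it is expected to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-584179 get deploy dashboard-metrics-scraper -n kubernetes-dashboard \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'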
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-584179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-584179 logs -n 25: (2.362160959s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
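
Note: the acquireMachinesLock entries above and below show several concurrent `minikube start` runs contending for one shared machines lock (same lock name, 500ms retry delay, 13m timeout) before they may create or fix a VM. As a rough, hypothetical sketch only (not minikube's actual lock implementation), a cross-process lock with that poll-and-timeout shape can be built on flock:

	// lockfile_sketch.go - Linux-only, illustrative; minikube uses its own
	// mutex package, this sketch just mirrors the Delay/Timeout in the log.
	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquire polls an exclusive flock on path every delay until timeout.
	func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil // lock held; caller closes f to release it
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out waiting for lock %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		fmt.Println("lock acquired")
	}
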
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
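
The restart sequence above brings the libvirt networks up and recreates the stopped domain from its saved XML before waiting for an IP. The sketch below is purely illustrative (the kvm2 driver talks to libvirt directly; virsh is used here only to keep the example self-contained):

	// virsh_restart_sketch.go - restart an existing libvirt domain via virsh.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
		return nil
	}

	func main() {
		domain, network := "embed-certs-274758", "mk-embed-certs-274758"
		// Ensure both the default network and the cluster network are active.
		for _, n := range []string{"default", network} {
			_ = run("net-start", n) // an already-active network returns an error we ignore
		}
		if err := run("start", domain); err != nil {
			fmt.Println("start failed:", err)
		}
	}
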
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
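
The long run of "dial tcp 192.168.50.169:22: connect: no route to host" lines above is a reachability probe against the guest's SSH port that keeps failing while the no-preload VM has no usable address; after roughly four minutes the host is declared not running and an outer retry is scheduled. A minimal sketch of such a probe (helper name and timeout are illustrative):

	// sshreachable_sketch.go - check whether a guest's SSH port answers.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func sshReachable(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err // e.g. "connect: no route to host" while the VM has no IP
		}
		return conn.Close()
	}

	func main() {
		if err := sshReachable("192.168.50.169:22", 10*time.Second); err != nil {
			fmt.Println("not reachable yet:", err)
		}
	}
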
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
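
The retry.go lines above poll libvirt's DHCP leases for the domain's MAC address with a growing, jittered delay until an IP appears. A hedged sketch of that backoff loop follows; lookupIP and the growth factor are placeholders, not minikube's code:

	// waitforip_sketch.go - poll for a DHCP lease with growing, jittered delays.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func lookupIP(mac string) (string, error) {
		// Placeholder: in the real driver this inspects libvirt's DHCP leases.
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() {
		ip, err := waitForIP("52:54:00:d3:3c:b1", 2*time.Minute)
		fmt.Println(ip, err)
	}
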
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
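
WaitForSSH here uses the external ssh client with the options logged above and simply runs `exit 0`; a zero exit status means key-based login works. A bare-bones sketch (user, IP and key path copied from the log, everything else assumed):

	// waitforssh_sketch.go - probe SSH by running `exit 0` via the system ssh client.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func probeSSH(user, ip, keyPath string) error {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		key := "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa"
		if err := probeSSH("docker", "192.168.72.76", key); err != nil {
			fmt.Println("SSH not ready yet:", err)
			return
		}
		fmt.Println("SSH is available")
	}
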
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
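
The provision step above issues a server certificate signed by the profile's CA, with the SAN set listed in the log (127.0.0.1, 192.168.72.76, embed-certs-274758, localhost, minikube). The sketch below shows one way to do that with Go's crypto/x509; the paths, expiry, and PKCS#1 key format are assumptions, not minikube's exact code:

	// servercert_sketch.go - issue a server cert signed by an existing CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caCertPEM, err := os.ReadFile("certs/ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("certs/ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caCertPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("failed to decode CA PEM")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
		if err != nil {
			log.Fatal(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-274758"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN set taken from the log line above.
			DNSNames:    []string{"embed-certs-274758", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.76")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
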
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
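
The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host clock and skip any adjustment because the 87ms delta is inside tolerance. A tiny sketch of that check (the 2s tolerance is an assumption):

	// clockdelta_sketch.go - compare a parsed guest timestamp against the host clock.
	package main

	import (
		"fmt"
		"math"
		"time"
	)

	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		guest := time.Unix(1733792897, 551711245) // parsed from the guest's `date +%s.%N` output
		host := time.Now()
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}
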
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
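The lines from roughly 01:08:18.52 to 01:08:18.66 show the guest being prepared for CRI-O: crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are set by sed-editing /etc/crio/crio.conf.d/02-crio.conf, and br_netfilter plus IPv4 forwarding are enabled before crio is restarted. A minimal sketch of the same preparation, assuming it runs as root directly on the guest (paths and values are copied from the log, not from minikube's source):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Point CRI-O at the desired pause image and cgroup manager, as in the log.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
	}
	for _, e := range edits {
		if err := run("sed", "-i", e, conf); err != nil {
			fmt.Fprintln(os.Stderr, "sed failed:", err)
			os.Exit(1)
		}
	}
	// Load br_netfilter and enable IPv4 forwarding before restarting CRI-O.
	_ = run("modprobe", "br_netfilter")
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
	_ = run("systemctl", "restart", "crio")
}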
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
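After restarting crio, the start code waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to report a runtime version before continuing. A small sketch of that kind of bounded wait on a socket path (timeout and path taken from the log; the polling interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a file (here the CRI-O socket) until it exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // polling interval: assumed, not from the log
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}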
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
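The interleaved 133241 lines show the old-k8s-version-094470 VM being restarted and polled for a DHCP lease, with retry.go backing off between attempts (260ms, 242ms, 411ms, ... up to several seconds). A hedged sketch of a retry loop with growing, jittered delays in the same spirit (the jitter and cap are assumptions, not minikube's actual backoff policy):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or maxWait has elapsed,
// sleeping a little longer (with jitter) after each failure.
func retry(maxWait time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	tries := 0
	_ = retry(30*time.Second, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}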
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
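Between 01:08:19.08 and 01:08:22.44 the preload tarball is handled: `crictl images` shows the expected kube-apiserver image is missing, so the ~392 MB preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 is copied to the node, extracted with lz4-aware tar into /var, deleted, and the image list re-checked. A minimal local sketch of the check-then-extract step, assuming tar and lz4 are installed and the tarball path is given on the command line:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: preload <tarball.lz4>")
		os.Exit(1)
	}
	tarball := os.Args[1]

	// Ask the runtime which images it already has; skip extraction if the
	// API server image for the target version is present.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err == nil && strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.31.2") {
		fmt.Println("images already preloaded, skipping")
		return
	}

	// Extract the lz4-compressed tarball into /var, preserving xattrs,
	// mirroring the tar invocation seen in the log.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Println("preloaded images extracted")
}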
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
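The kubeadm/kubelet/kube-proxy configuration printed above is rendered in memory and then written to the node as /var/tmp/minikube/kubeadm.yaml.new (2295 bytes), alongside the kubelet unit files. A hedged sketch of writing such an in-memory config safely to its target path (file names come from the log; the temp-file-then-rename strategy is an assumption for the sketch, not minikube's actual mechanism):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeConfig writes data to path via a temp file and rename, so a partially
// written kubeadm.yaml.new is never left behind.
func writeConfig(path string, data []byte) error {
	tmp, err := os.CreateTemp(filepath.Dir(path), ".kubeadm-*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), path)
}

func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ... rest of the generated config ...\n")
	if err := writeConfig("/var/tmp/minikube/kubeadm.yaml.new", cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}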
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
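Before deciding whether the existing control-plane certificates can be reused, each one is checked with `openssl x509 -checkend 86400`, i.e. "does this certificate stay valid for at least the next 24 hours?". The same check can be done in-process; a minimal sketch using crypto/x509 (the certificate path is a command-line argument and only the first PEM block is inspected):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// for at least d from now, mirroring `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor(os.Args[1], 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	fmt.Println("valid for the next 24h:", ok)
}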
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
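The block above walks admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, grepping each for the expected https://control-plane.minikube.internal:8443 endpoint and removing any file that does not contain it (here none of the files exist yet, so each grep fails and the rm is a no-op). A compact sketch of that stale-kubeconfig sweep (endpoint and file list taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing at the wrong endpoint: remove so kubeadm regenerates it.
			_ = os.Remove(f)
			fmt.Printf("removed stale or missing %s\n", f)
			continue
		}
		fmt.Printf("keeping %s\n", f)
	}
}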
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
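The healthz sequence above is typical of a control-plane restart: anonymous requests are first rejected with 403, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running, and finally 200 once bootstrapping completes. A hedged sketch of that polling loop against https://192.168.72.76:8443/healthz (TLS verification is skipped here purely for brevity; the real client would use the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.76:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}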
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
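	The pod_ready waits above poll each system-critical pod's Ready condition until it flips to True or the 4m0s budget runs out. A minimal sketch of that polling pattern, assuming client-go (this is not minikube's pod_ready.go; the kubeconfig path and pod name are placeholders):

```go
// A minimal sketch, assuming client-go; not minikube's pod_ready.go.
// Polls a pod's Ready condition until it is True or the budget expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has Ready=True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path and pod name, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait above
	for time.Now().Before(deadline) {
		ok, err := podReady(context.Background(), cs, "kube-system", "kube-scheduler-embed-certs-274758")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```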
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
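	The "will retry after ..." lines come from a backoff loop that keeps re-checking the libvirt domain until it reports an IP address. An illustrative retry-with-jittered-backoff helper in Go (the delays and attempt count below are arbitrary assumptions, not minikube's retry.go values):

```go
// Illustrative only, not minikube's retry.go: re-run an operation with a
// growing, jittered delay, the pattern behind the "will retry after ..." lines.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Exponential base delay plus up to ~50% random jitter.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay/2) + 1))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```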
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
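	Each "About to run SSH command" / "SSH cmd err, output" pair above is one command executed on the guest over key-based SSH. A minimal sketch of that round trip, assuming golang.org/x/crypto/ssh rather than minikube's own ssh_runner/sshutil; the key path is hypothetical, while the address and user match the log:

```go
// A minimal sketch assuming golang.org/x/crypto/ssh, not minikube's
// ssh_runner/sshutil: run one command on the guest with key-based auth.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Hypothetical key path; the address and user come from the log above.
	out, err := runSSH("192.168.61.11:22", "docker",
		"/home/user/.minikube/machines/old-k8s-version-094470/id_rsa", "hostname")
	fmt.Println(out, err)
}
```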
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
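	The guest-clock check runs `date +%s.%N` on the VM and compares the result to the host clock; the run above measured a delta of about 82ms, well within tolerance. A small sketch of that comparison (the 1s tolerance below is an assumed value, not minikube's):

```go
// A small sketch, not minikube's fix.go: parse the guest's `date +%s.%N`
// output and compare it to a host timestamp. The 1s tolerance is assumed.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	// `date +%s.%N` prints seconds and a 9-digit nanosecond fraction.
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log above: guest 01:08:36.738645658, host 01:08:36.656836618.
	delta, err := guestClockDelta("1733792916.738645658\n", time.Unix(1733792916, 656836618))
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%s within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 1.0)
}
```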
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
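	The preloaded-images tarball is decompressed with lz4 and unpacked into /var while preserving extended attributes, which is exactly the tar invocation logged above. A hedged sketch of shelling out to that command from Go (assumes sudo, tar and lz4 are available on the target; this is not minikube's crio.go):

```go
// A hedged sketch mirroring the tar invocation logged above; assumes
// sudo, tar and lz4 are present on the target. Not minikube's crio.go.
package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks the preloaded-images tarball into /var,
// keeping extended attributes such as security.capability.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```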
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
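	The kubeadm documents rendered above are not written straight to /var/tmp/minikube/kubeadm.yaml; they are first staged as kubeadm.yaml.new (the 2120-byte scp just logged) and only promoted after the diff check further down in this log. A small sketch of how both files could be inspected on the node, assuming the profile is still up and reachable via minikube ssh:

	  minikube -p old-k8s-version-094470 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
	  minikube -p old-k8s-version-094470 ssh -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new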
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
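	The /etc/hosts rewrite above is idempotent: it filters out any existing line ending in control-plane.minikube.internal, appends the current mapping, writes the result to a temp file, and copies it back over /etc/hosts. After it runs the file should contain a single entry such as:

	  192.168.61.11	control-plane.minikube.internal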
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
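	The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL c_rehash convention: the link name is the certificate's subject hash, as printed by the preceding `openssl x509 -hash -noout` runs, plus a ".0" suffix, which is how OpenSSL looks up CA certificates under /etc/ssl/certs. A minimal sketch of that step for one certificate:

	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"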
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
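	Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check is presumably what would make this restart path regenerate the affected certificates. For example:

	  if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "etcd server cert expires within 24h"
	  fi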
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
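	The repeated pgrep runs above are the apiserver wait loop: minikube polls roughly every 500ms for a kube-apiserver process started with the minikube config. A rough shell equivalent of that loop (the real wait is implemented in Go inside minikube; this is only an illustration) would be:

	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	    sleep 0.5
	  done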
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
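	The inline script sent over SSH above keeps /etc/hosts consistent with the new hostname: if no entry for default-k8s-diff-port-901295 exists, it either rewrites an existing 127.0.1.1 line or appends one, so the file ends up with a line like:

	  127.0.1.1 default-k8s-diff-port-901295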
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
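	provision.go regenerates the machine's server certificate with the SAN list shown above (127.0.0.1, 192.168.39.193, the machine name, localhost, minikube). If needed, the SANs on the resulting server.pem could be confirmed from the CI host with something like:

	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'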
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
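	The cache_images sequence above stats each staged tarball under /var/lib/minikube/images, skips the transfer when the file already exists, and then loads it into CRI-O's image store with "podman load". A minimal sketch of that stat-then-load step, assuming a run on the VM itself with the etcd tarball path from this log; loadCachedImage is an illustrative name, not minikube's API.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadCachedImage checks that the staged image tarball is present and, if so,
	// loads it into the container runtime's store, mirroring the stat / podman load
	// pair in the log. Running this for real requires root on the minikube VM.
	func loadCachedImage(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cached image %s not staged: %w", tarball, err)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.15-0"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}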
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
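	Each of the "openssl x509 -checkend 86400" runs above asks whether a certificate will expire within the next 24 hours; a non-zero exit would force regeneration. A rough Go equivalent of that check, assuming the apiserver-kubelet-client cert path from the log; checkend is an illustrative name.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate at path expires within the given
	// window, the same question "openssl x509 -checkend 86400" answers in the log.
	func checkend(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}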
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
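	The repeated "listing CRI containers" / "0 containers" entries above are minikube probing the node for each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and finding nothing, which is why every cycle falls back to gathering kubelet, dmesg, CRI-O and container-status logs instead. A minimal Go sketch of that probe loop follows; only the crictl invocation is taken from the log, and the package layout and output strings are illustrative, not minikube's actual cri.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same probe the log shows ssh_runner executing on the node.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("probe %s failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				// Corresponds to the `found id: ""` / "0 containers" lines above.
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}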
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
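	The pod_ready lines above are a poll loop with a 4m0s deadline: once metrics-server-6867b74b74-mhxtf fails to become Ready within that window, minikube stops waiting and resets the cluster with kubeadm, as the last three lines show. A generic poll-until-deadline sketch in plain Go is given below; the helper name and shortened timeout are hypothetical, not minikube's actual pod_ready.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls checkReady at the given interval until it returns true or
	// the timeout elapses, mirroring the "timed out waiting 4m0s" behaviour above.
	func waitFor(timeout, interval time.Duration, checkReady func() bool) error {
		deadline := time.Now().Add(timeout)
		for {
			if checkReady() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		// The condition never becomes true, so this fails at the deadline
		// (shortened to 2s here so the example finishes quickly).
		err := waitFor(2*time.Second, 500*time.Millisecond, func() bool { return false })
		fmt.Printf("waited %s: %v\n", time.Since(start).Round(time.Millisecond), err)
	}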
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
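	The config check above follows a simple pattern that the log makes visible: for each expected kubeconfig file, grep for https://control-plane.minikube.internal:8443 and remove the file if it is missing or does not reference that endpoint, so the kubeadm init that follows can regenerate it. A rough Go equivalent of that cleanup step is sketched below; it is illustrative only, since minikube actually shells out to grep and rm as shown in the log.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Mirrors the "may not be in ... - will remove" log lines above.
				fmt.Printf("removing stale %s\n", f)
				_ = os.Remove(f)
			}
		}
	}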
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
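
The block above shows minikube re-running "kubectl get sa default" roughly every 500 ms until the default service account exists (about 4.3 s in this run). A minimal sketch of that poll-with-deadline pattern in Go, assuming a hypothetical check callback standing in for the real kubectl invocation; this is not minikube's actual helper, just an illustration of the cadence visible in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// pollUntil retries check() every interval until it succeeds or the deadline
// passes, mirroring the ~500 ms retry cadence visible in the log above.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
		// Hypothetical stand-in for the "kubectl get sa default" probe.
		return exec.Command("kubectl", "get", "sa", "default").Run()
	})
	fmt.Println("waited", time.Since(start), "err:", err)
}
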
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
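
The log-gathering pass above repeatedly resolves container IDs with "crictl ps -a --quiet --name=..." and then dumps each container's last 400 log lines. A small Go sketch of that two-step pattern, using only the crictl invocations visible in the log; the sudo handling and component list are simplified assumptions, not minikube's exact SSH-based implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs finds container IDs for a given name with crictl, then prints
// the last 400 lines of each container's logs, as the log collection above does.
func gatherLogs(name string) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("=== %s [%s] ===\n", name, id)
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Print(string(logs))
	}
	return nil
}

func main() {
	// Component names taken from the log above; extend as needed.
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		if err := gatherLogs(name); err != nil {
			fmt.Println("error gathering", name, ":", err)
		}
	}
}
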
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
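
The "waiting for k8s-apps to be running" step above simply enumerates the kube-system pods and looks at their phase before the no-preload cluster is declared ready. A minimal client-go sketch of that kind of enumeration follows; the kubeconfig path is taken from the kubectl invocations later in this log, and minikube's real check additionally decides which non-Running pods (such as the Pending metrics-server seen here) are tolerated, which the sketch does not attempt.

    // listpods.go - illustrative sketch of the kube-system pod enumeration
    // logged by system_pods.go above (not minikube's actual implementation).
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the log later shows kubectl being run
    	// with --kubeconfig=/var/lib/minikube/kubeconfig inside the VM.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Phase is Pending, Running, Succeeded, Failed or Unknown; the wait
    		// above proceeds once the pods it cares about report Running.
    		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }
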
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
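
The api_server.go lines above show the readiness gate both clusters passed before "Done!": poll https://<node-ip>:<port>/healthz until it returns 200, then read back the control-plane version. A minimal sketch of that polling loop is below, assuming the 192.168.39.193:8444 endpoint from this run; TLS verification is skipped only because the sketch carries no CA bundle, whereas a real check would trust the cluster CA.

    // healthzwait.go - minimal sketch of polling an apiserver /healthz endpoint
    // until it reports 200, loosely mirroring the api_server.go wait logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForAPIServerHealthz(url string, timeout time.Duration) error {
    	// InsecureSkipVerify is an illustration-only shortcut; see note above.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	// 8444 is the non-default apiserver port used by this profile in the log.
    	if err := waitForAPIServerHealthz("https://192.168.39.193:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
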
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
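
Every "[kubelet-check]" line in the failure above is the same probe: an HTTP GET against the kubelet healthz port 10248 on localhost, which keeps failing with "connection refused" because the kubelet never comes up on this v1.20.0 run. A minimal sketch of such a probe, distinguishing "nothing listening" from an unhealthy response, might look like the following; the port comes from the kubeadm output, everything else is illustrative.

    // kubeletcheck.go - illustrative probe of the kubelet healthz endpoint,
    // separating "connection refused" (kubelet not running) from an
    // unhealthy HTTP response, as the kubeadm check above keeps reporting.
    package main

    import (
    	"errors"
    	"fmt"
    	"net/http"
    	"syscall"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 3 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	switch {
    	case errors.Is(err, syscall.ECONNREFUSED):
    		fmt.Println("kubelet is not listening on 10248 (not running, or crashed at startup)")
    	case err != nil:
    		fmt.Printf("healthz request failed: %v\n", err)
    	case resp.StatusCode != http.StatusOK:
    		resp.Body.Close()
    		fmt.Printf("kubelet answered but is unhealthy: HTTP %d\n", resp.StatusCode)
    	default:
    		resp.Body.Close()
    		fmt.Println("kubelet healthz ok")
    	}
    }
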
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
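
The kubeadm.go:155-163 sequence above is a small stale-config sweep before the retry: for each kubeconfig under /etc/kubernetes, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm init regenerates it (here every grep fails simply because the reset already deleted the files). A rough Go sketch of that sweep is below; it reads the files directly instead of shelling out to the sudo grep/rm commands shown in the log, but the end state is the same.

    // stalecfg.go - rough sketch of the stale kubeconfig sweep logged by
    // kubeadm.go above; reads files directly instead of running grep/rm via ssh.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			// Missing file: nothing to clean up, matching the failed greps above
    			// (the log's rm -f is simply a no-op in that case).
    			fmt.Printf("%s: %v (skipping)\n", f, err)
    			continue
    		}
    		if strings.Contains(string(data), endpoint) {
    			fmt.Printf("%s already points at %s, keeping it\n", f, endpoint)
    			continue
    		}
    		// Config left over from another cluster: remove so kubeadm regenerates it.
    		if err := os.Remove(f); err != nil {
    			fmt.Printf("failed to remove %s: %v\n", f, err)
    		}
    	}
    }
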
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
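
After the timeout, the cri.go/logs.go lines above walk through each expected component (kube-apiserver, etcd, coredns, and so on) and ask crictl whether any container for it ever existed; every query comes back empty, which confirms the kubelet never launched a single static pod. A compact sketch of that sweep via os/exec is below; the crictl flags are the ones shown in the log, and error handling is pared down for brevity.

    // crisweep.go - compact sketch of the per-component container sweep logged
    // by cri.go above: for each component, list matching containers with crictl.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		// Same invocation as in the log: all states, IDs only, filtered by name.
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", name)
    			continue
    		}
    		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
    	}
    }

An empty result for every component, as in this run, points the investigation at the kubelet itself rather than at any one control-plane container, which is exactly where the journalctl/dmesg gathering that follows picks up.
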
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.274305703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4527a824-740d-49f1-a998-818243733621 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.274616199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4527a824-740d-49f1-a998-818243733621 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.318329995Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a1b7d36-cfd5-41a5-9ea1-a9cc7a09282d name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.318459689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a1b7d36-cfd5-41a5-9ea1-a9cc7a09282d name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.320710075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e02e3ba-741d-4524-9e04-08a15f9be53f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.321530895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794109321495687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e02e3ba-741d-4524-9e04-08a15f9be53f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.322462592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d181351f-4813-4cd6-8c1b-81e836f89004 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.322528208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d181351f-4813-4cd6-8c1b-81e836f89004 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.322723534Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d181351f-4813-4cd6-8c1b-81e836f89004 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.359701917Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=d6fea6a2-d0df-48b6-9dc5-546152e77278 name=/runtime.v1.RuntimeService/Status
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.359837808Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d6fea6a2-d0df-48b6-9dc5-546152e77278 name=/runtime.v1.RuntimeService/Status
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.366961772Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b44b0a4-2a23-4353-99fd-219a715dfb16 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.367106244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b44b0a4-2a23-4353-99fd-219a715dfb16 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.368875240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7d83910-dc1b-4805-af49-a12e344fab8d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.369330004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794109369299020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7d83910-dc1b-4805-af49-a12e344fab8d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.370096713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da36d181-a7fc-4d02-881d-5426788a5a21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.370172838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da36d181-a7fc-4d02-881d-5426788a5a21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.370492721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da36d181-a7fc-4d02-881d-5426788a5a21 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.409589212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1444849c-ccb4-4b7d-90bf-bbe671bf1252 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.409712101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1444849c-ccb4-4b7d-90bf-bbe671bf1252 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.411271332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a249645-bfae-4ea2-abe4-88136184900b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.411696027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794109411658403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a249645-bfae-4ea2-abe4-88136184900b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.412350594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3815ca4c-2031-4528-b03d-1d6762d608a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.412422850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3815ca4c-2031-4528-b03d-1d6762d608a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:29 no-preload-584179 crio[713]: time="2024-12-10 01:28:29.412648383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793010215594431,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d865adaba29cda491dbb5cec02f6ea7f225383d7a4064ab8aa9807f38b5533e1,PodSandboxId:6160e4478c9ac914e0e8c54522dc81cb0900a2376982c7d8a6ffcd0cd79a295e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733792989265657531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04,PodSandboxId:d191d9273f6a401b5a2eec9e6d63de064eef42f3d7669f3c58b9e89f95b40d87,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733792987075240511,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hhsm5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dddb227a-7c16-4acd-be5f-1ab38b78129c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0,PodSandboxId:43f1bc8c241bb7947b7902649f169fdb013bf7935b68fafb167b68ea59994060,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733792979506569749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3
1180637-f48e-4dda-8ec3-56155bb300cf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77,PodSandboxId:bd89636873ac2a97f98a6cd383631fe3d947073f2f78eb55a13da90f04b3e9fd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733792979437640962,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcjs2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec6cf5b1-3ea9-4868-874d-61e262cca0
c5,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692,PodSandboxId:8dc1b0cbaf251c9c2fa854cf86837997874f49fcf8e9da14d23bc4993cea75a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733792974645290830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90467e946d30cd9fb80657e65b9e5082,},Annotati
ons:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490,PodSandboxId:7a59f1561e3291477964a352e900b0cc99d32b65de5dee19de981bed42f907c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733792974639690387,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd3d7c542523abf822e63f6e3439952a,},Annotations:map[string]string{io.kubernetes.contain
er.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f,PodSandboxId:5e11ae93148941b0b34ca918752fdeb0aba213415416aabcd43583516fa74ab9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733792974628487120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1ea1d24e6f8505a1013a0d087fdda56,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c,PodSandboxId:6767f178755ebed93772ac822c14663c96ae3f0505621fe68b357e4a85fe031b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733792974617550855,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-584179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4044cc8e6b1094ccd3d98e2ee8467661,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3815ca4c-2031-4528-b03d-1d6762d608a9 name=/runtime.v1.RuntimeService/ListContainers
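
The CRI-O debug entries above are the node's crio journal as collected at failure time. A comparable stream can be pulled by hand from the same VM (a sketch, assuming the no-preload-584179 profile still exists and is reachable via minikube ssh):

    minikube -p no-preload-584179 ssh -- sudo journalctl -u crio --no-pager | tail -n 200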
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8ccea68bfe8c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   43f1bc8c241bb       storage-provisioner
	d865adaba29cd       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   6160e4478c9ac       busybox
	7d559bbd79cd2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      18 minutes ago      Running             coredns                   1                   d191d9273f6a4       coredns-7c65d6cfc9-hhsm5
	abb7462dd698b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       2                   43f1bc8c241bb       storage-provisioner
	eef419f8befc6       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      18 minutes ago      Running             kube-proxy                1                   bd89636873ac2       kube-proxy-xcjs2
	c9c3cf60e1de6       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      18 minutes ago      Running             kube-scheduler            1                   8dc1b0cbaf251       kube-scheduler-no-preload-584179
	bad358581c44d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      18 minutes ago      Running             etcd                      1                   7a59f1561e329       etcd-no-preload-584179
	7147c6004e066       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      18 minutes ago      Running             kube-controller-manager   1                   5e11ae9314894       kube-controller-manager-no-preload-584179
	0e94f76a99534       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      18 minutes ago      Running             kube-apiserver            1                   6767f178755eb       kube-apiserver-no-preload-584179
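
The container status table above is minikube's snapshot of the CRI-O runtime state. An equivalent listing can be reproduced directly on the node (a sketch, assuming SSH access to the profile's VM, where crictl ships with the minikube guest image):

    minikube -p no-preload-584179 ssh -- sudo crictl ps -a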
	
	
	==> coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53106 - 35035 "HINFO IN 457833088587050374.564137791752783472. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021462694s
	
	
	==> describe nodes <==
	Name:               no-preload-584179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-584179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=no-preload-584179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T00_59_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:59:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-584179
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:28:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:25:25 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:25:25 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:25:25 +0000   Tue, 10 Dec 2024 00:59:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:25:25 +0000   Tue, 10 Dec 2024 01:09:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.169
	  Hostname:    no-preload-584179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60d05cb18de2438e91da99c2b762f33f
	  System UUID:                60d05cb1-8de2-438e-91da-99c2b762f33f
	  Boot ID:                    8f8d21a7-9800-49be-b5b0-669683a98481
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-7c65d6cfc9-hhsm5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-584179                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-584179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-584179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-xcjs2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-584179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-lwgxd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-584179 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-584179 event: Registered Node no-preload-584179 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-584179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-584179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-584179 event: Registered Node no-preload-584179 in Controller
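
The node description above is the standard kubectl view of the control-plane node; it can be regenerated against the same cluster (a sketch, assuming the no-preload-584179 kubeconfig context created by minikube is still present):

    kubectl --context no-preload-584179 describe node no-preload-584179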
	
	
	==> dmesg <==
	[Dec10 01:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053119] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042338] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Dec10 01:09] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.034606] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.581147] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.320088] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.053949] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049066] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.200738] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.109829] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.250556] systemd-fstab-generator[704]: Ignoring "noauto" option for root device
	[ +15.002000] systemd-fstab-generator[1305]: Ignoring "noauto" option for root device
	[  +0.060801] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.949765] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +4.394534] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.197907] systemd-fstab-generator[2061]: Ignoring "noauto" option for root device
	[  +3.671201] kauditd_printk_skb: 61 callbacks suppressed
	[Dec10 01:10] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] <==
	{"level":"info","ts":"2024-12-10T01:09:35.220453Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T01:09:35.220632Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.169:2380"}
	{"level":"info","ts":"2024-12-10T01:09:35.220662Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.169:2380"}
	{"level":"info","ts":"2024-12-10T01:09:37.045257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e received MsgPreVoteResp from f8345cbe35aa418e at term 2"}
	{"level":"info","ts":"2024-12-10T01:09:37.045421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became candidate at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045427Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e received MsgVoteResp from f8345cbe35aa418e at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8345cbe35aa418e became leader at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.045449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8345cbe35aa418e elected leader f8345cbe35aa418e at term 3"}
	{"level":"info","ts":"2024-12-10T01:09:37.085177Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:09:37.086168Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:09:37.086846Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:09:37.089766Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f8345cbe35aa418e","local-member-attributes":"{Name:no-preload-584179 ClientURLs:[https://192.168.50.169:2379]}","request-path":"/0/members/f8345cbe35aa418e/attributes","cluster-id":"8f5c98dd1b14dce8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T01:09:37.090026Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T01:09:37.091225Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:09:37.091950Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.169:2379"}
	{"level":"info","ts":"2024-12-10T01:09:37.102852Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T01:09:37.102895Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T01:19:37.125183Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":878}
	{"level":"info","ts":"2024-12-10T01:19:37.134648Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":878,"took":"9.068924ms","hash":2104739600,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2666496,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-12-10T01:19:37.134703Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2104739600,"revision":878,"compact-revision":-1}
	{"level":"info","ts":"2024-12-10T01:24:37.132207Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1120}
	{"level":"info","ts":"2024-12-10T01:24:37.136130Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1120,"took":"3.363774ms","hash":2008861796,"current-db-size-bytes":2666496,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-10T01:24:37.136205Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2008861796,"revision":1120,"compact-revision":878}
	
	
	==> kernel <==
	 01:28:29 up 19 min,  0 users,  load average: 0.03, 0.06, 0.07
	Linux no-preload-584179 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] <==
	E1210 01:24:39.423588       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:24:39.423678       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:24:39.425745       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:24:39.425743       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:25:39.426274       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 01:25:39.426533       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:25:39.426727       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1210 01:25:39.426781       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 01:25:39.427895       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:25:39.427979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:27:39.428973       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:27:39.429221       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1210 01:27:39.428974       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:27:39.429318       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:27:39.430520       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:27:39.430563       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] <==
	E1210 01:23:12.052380       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:23:12.496182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:23:42.058534       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:23:42.502876       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:24:12.064651       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:24:12.509832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:24:42.070952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:24:42.517107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:25:12.077134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:12.525319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:25:25.745551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-584179"
	E1210 01:25:42.083612       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:42.532779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:25:53.027527       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="250.106µs"
	I1210 01:26:08.027623       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="100.374µs"
	E1210 01:26:12.089275       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:12.540496       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:26:42.095772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:42.547512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:12.101898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:12.555001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:42.107863       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:42.565308       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:28:12.113718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:28:12.576091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:09:39.764829       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:09:39.778996       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.169"]
	E1210 01:09:39.779177       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:09:39.861413       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:09:39.861453       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:09:39.861486       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:09:39.866742       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:09:39.866995       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:09:39.867021       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:09:39.868776       1 config.go:199] "Starting service config controller"
	I1210 01:09:39.868817       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:09:39.868880       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:09:39.868898       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:09:39.869938       1 config.go:328] "Starting node config controller"
	I1210 01:09:39.869966       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:09:39.969482       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:09:39.969535       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:09:39.970750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] <==
	W1210 01:09:38.387627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.387706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.387906       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.387942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 01:09:38.388216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388239       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:09:38.388302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 01:09:38.388448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:09:38.388624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.388939       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 01:09:38.389012       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 01:09:38.391485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1210 01:09:38.391598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.391717       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 01:09:38.391795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.391904       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 01:09:38.391936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.392115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 01:09:38.392193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:09:38.394124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:09:38.394161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1210 01:09:39.473379       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:27:14 no-preload-584179 kubelet[1432]: E1210 01:27:14.278210    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794034277633267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:24 no-preload-584179 kubelet[1432]: E1210 01:27:24.280455    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794044280132551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:24 no-preload-584179 kubelet[1432]: E1210 01:27:24.280503    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794044280132551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:25 no-preload-584179 kubelet[1432]: E1210 01:27:25.012232    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]: E1210 01:27:34.054946    1432 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]: E1210 01:27:34.282436    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794054282142471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:34 no-preload-584179 kubelet[1432]: E1210 01:27:34.282474    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794054282142471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:39 no-preload-584179 kubelet[1432]: E1210 01:27:39.012839    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:27:44 no-preload-584179 kubelet[1432]: E1210 01:27:44.283873    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794064283452089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:44 no-preload-584179 kubelet[1432]: E1210 01:27:44.283972    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794064283452089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:53 no-preload-584179 kubelet[1432]: E1210 01:27:53.012992    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:27:54 no-preload-584179 kubelet[1432]: E1210 01:27:54.285601    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794074285350012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:27:54 no-preload-584179 kubelet[1432]: E1210 01:27:54.285895    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794074285350012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:04 no-preload-584179 kubelet[1432]: E1210 01:28:04.287402    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794084287075254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:04 no-preload-584179 kubelet[1432]: E1210 01:28:04.287436    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794084287075254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:08 no-preload-584179 kubelet[1432]: E1210 01:28:08.013185    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:28:14 no-preload-584179 kubelet[1432]: E1210 01:28:14.288734    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794094288476395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:14 no-preload-584179 kubelet[1432]: E1210 01:28:14.288760    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794094288476395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:23 no-preload-584179 kubelet[1432]: E1210 01:28:23.012680    1432 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lwgxd" podUID="0e7f1063-8508-4f5b-b8ff-bbd387a53919"
	Dec 10 01:28:24 no-preload-584179 kubelet[1432]: E1210 01:28:24.290202    1432 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794104289937186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:28:24 no-preload-584179 kubelet[1432]: E1210 01:28:24.290243    1432 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794104289937186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] <==
	I1210 01:10:10.308605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:10:10.319971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:10:10.320146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:10:27.717645       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:10:27.717930       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11!
	I1210 01:10:27.718544       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6f33a069-539f-40b3-a154-c9bb954b4b41", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11 became leader
	I1210 01:10:27.818913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-584179_c88d5690-32b8-4f74-8f3b-f3bee45d3f11!
	
	
	==> storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] <==
	I1210 01:09:39.658823       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1210 01:10:09.661878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-584179 -n no-preload-584179
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-584179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lwgxd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd: exit status 1 (63.777753ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lwgxd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-584179 describe pod metrics-server-6867b74b74-lwgxd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (323.64s)
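
The kubelet log above shows the metrics-server pod stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, which looks like a deliberately unreachable registry substituted into the addon for this test. As a rough manual spot check (illustrative only, reusing the context name from the logs), the image configured on that deployment can be read back with:

	kubectl --context no-preload-584179 -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

If that prints the fake.domain reference, the repeated image-pull back-off and the 503 responses for v1beta1.metrics.k8s.io in the kube-apiserver log are consistent with it.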

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-10 01:30:43.600224382 +0000 UTC m=+6436.679862522
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-901295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-901295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
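
Because the describe call above already hit the context deadline, no deployment info could be captured. For reference, a manual check of the images behind the dashboard addon (assuming the cluster were still reachable under this context) would look roughly like:

	kubectl --context default-k8s-diff-port-901295 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[*].image}{"\n"}{end}'

The assertion at start_stop_delete_test.go:297 only passes if the dashboard-metrics-scraper entry in that output contains registry.k8s.io/echoserver:1.4.
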
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-901295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-901295 logs -n 25: (1.363549861s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-796478 sudo crictl                           | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | pods                                                 |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo crictl ps                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | --all                                                |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo find                             | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /etc/cni -type f -exec sh -c                         |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo ip a s                           | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	| ssh     | -p auto-796478 sudo ip r s                           | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	| ssh     | -p auto-796478 sudo                                  | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo iptables                         | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo journalctl                       | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo docker                           | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo                                  | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo systemctl                        | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | cat containerd --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /lib/systemd/system/containerd.service               |             |         |         |                     |                     |
	| ssh     | -p auto-796478 sudo cat                              | auto-796478 | jenkins | v1.34.0 | 10 Dec 24 01:30 UTC | 10 Dec 24 01:30 UTC |
	|         | /etc/containerd/config.toml                          |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:30:24
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:30:24.961187  142195 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:30:24.961542  142195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:30:24.961558  142195 out.go:358] Setting ErrFile to fd 2...
	I1210 01:30:24.961566  142195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:30:24.961846  142195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:30:24.962466  142195 out.go:352] Setting JSON to false
	I1210 01:30:24.963445  142195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11576,"bootTime":1733782649,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:30:24.963502  142195 start.go:139] virtualization: kvm guest
	I1210 01:30:24.965783  142195 out.go:177] * [calico-796478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:30:24.967247  142195 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:30:24.967266  142195 notify.go:220] Checking for updates...
	I1210 01:30:24.969721  142195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:30:24.971052  142195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:30:24.972430  142195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:30:24.973629  142195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:30:24.974879  142195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:30:24.976653  142195 config.go:182] Loaded profile config "auto-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:30:24.976812  142195 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:30:24.976947  142195 config.go:182] Loaded profile config "kindnet-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:30:24.977079  142195 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:30:25.013395  142195 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 01:30:25.014640  142195 start.go:297] selected driver: kvm2
	I1210 01:30:25.014661  142195 start.go:901] validating driver "kvm2" against <nil>
	I1210 01:30:25.014676  142195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:30:25.015649  142195 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:30:25.015731  142195 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:30:25.030655  142195 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:30:25.030693  142195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1210 01:30:25.030922  142195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:30:25.030956  142195 cni.go:84] Creating CNI manager for "calico"
	I1210 01:30:25.030965  142195 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1210 01:30:25.031024  142195 start.go:340] cluster config:
	{Name:calico-796478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-796478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:30:25.031159  142195 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:30:25.032735  142195 out.go:177] * Starting "calico-796478" primary control-plane node in "calico-796478" cluster
	I1210 01:30:25.033958  142195 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:30:25.033986  142195 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:30:25.033999  142195 cache.go:56] Caching tarball of preloaded images
	I1210 01:30:25.034086  142195 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:30:25.034100  142195 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:30:25.034213  142195 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/calico-796478/config.json ...
	I1210 01:30:25.034237  142195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/calico-796478/config.json: {Name:mkcefd5606000a9d8f5b7b395c9d3d2796d5418f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:30:25.034400  142195 start.go:360] acquireMachinesLock for calico-796478: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:30:25.034439  142195 start.go:364] duration metric: took 22.978µs to acquireMachinesLock for "calico-796478"
	I1210 01:30:25.034462  142195 start.go:93] Provisioning new machine with config: &{Name:calico-796478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-796478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:30:25.034548  142195 start.go:125] createHost starting for "" (driver="kvm2")
	I1210 01:30:25.036131  142195 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1210 01:30:25.036271  142195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:30:25.036311  142195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:30:25.051557  142195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I1210 01:30:25.051890  142195 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:30:25.052438  142195 main.go:141] libmachine: Using API Version  1
	I1210 01:30:25.052460  142195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:30:25.052778  142195 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:30:25.052938  142195 main.go:141] libmachine: (calico-796478) Calling .GetMachineName
	I1210 01:30:25.053131  142195 main.go:141] libmachine: (calico-796478) Calling .DriverName
	I1210 01:30:25.053264  142195 start.go:159] libmachine.API.Create for "calico-796478" (driver="kvm2")
	I1210 01:30:25.053309  142195 client.go:168] LocalClient.Create starting
	I1210 01:30:25.053338  142195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem
	I1210 01:30:25.053364  142195 main.go:141] libmachine: Decoding PEM data...
	I1210 01:30:25.053384  142195 main.go:141] libmachine: Parsing certificate...
	I1210 01:30:25.053429  142195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem
	I1210 01:30:25.053447  142195 main.go:141] libmachine: Decoding PEM data...
	I1210 01:30:25.053461  142195 main.go:141] libmachine: Parsing certificate...
	I1210 01:30:25.053476  142195 main.go:141] libmachine: Running pre-create checks...
	I1210 01:30:25.053484  142195 main.go:141] libmachine: (calico-796478) Calling .PreCreateCheck
	I1210 01:30:25.053839  142195 main.go:141] libmachine: (calico-796478) Calling .GetConfigRaw
	I1210 01:30:25.054220  142195 main.go:141] libmachine: Creating machine...
	I1210 01:30:25.054233  142195 main.go:141] libmachine: (calico-796478) Calling .Create
	I1210 01:30:25.054344  142195 main.go:141] libmachine: (calico-796478) Creating KVM machine...
	I1210 01:30:25.055511  142195 main.go:141] libmachine: (calico-796478) DBG | found existing default KVM network
	I1210 01:30:25.056603  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.056430  142218 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:44:7d} reservation:<nil>}
	I1210 01:30:25.057410  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.057341  142218 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c5:80:1c} reservation:<nil>}
	I1210 01:30:25.058449  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.058399  142218 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a7080}
	I1210 01:30:25.058514  142195 main.go:141] libmachine: (calico-796478) DBG | created network xml: 
	I1210 01:30:25.058532  142195 main.go:141] libmachine: (calico-796478) DBG | <network>
	I1210 01:30:25.058538  142195 main.go:141] libmachine: (calico-796478) DBG |   <name>mk-calico-796478</name>
	I1210 01:30:25.058542  142195 main.go:141] libmachine: (calico-796478) DBG |   <dns enable='no'/>
	I1210 01:30:25.058550  142195 main.go:141] libmachine: (calico-796478) DBG |   
	I1210 01:30:25.058573  142195 main.go:141] libmachine: (calico-796478) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1210 01:30:25.058599  142195 main.go:141] libmachine: (calico-796478) DBG |     <dhcp>
	I1210 01:30:25.058625  142195 main.go:141] libmachine: (calico-796478) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1210 01:30:25.058668  142195 main.go:141] libmachine: (calico-796478) DBG |     </dhcp>
	I1210 01:30:25.058694  142195 main.go:141] libmachine: (calico-796478) DBG |   </ip>
	I1210 01:30:25.058704  142195 main.go:141] libmachine: (calico-796478) DBG |   
	I1210 01:30:25.058716  142195 main.go:141] libmachine: (calico-796478) DBG | </network>
	I1210 01:30:25.058742  142195 main.go:141] libmachine: (calico-796478) DBG | 
	I1210 01:30:25.063475  142195 main.go:141] libmachine: (calico-796478) DBG | trying to create private KVM network mk-calico-796478 192.168.61.0/24...
	I1210 01:30:25.139859  142195 main.go:141] libmachine: (calico-796478) DBG | private KVM network mk-calico-796478 192.168.61.0/24 created
	I1210 01:30:25.139893  142195 main.go:141] libmachine: (calico-796478) Setting up store path in /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478 ...
	I1210 01:30:25.139905  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.139832  142218 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:30:25.139938  142195 main.go:141] libmachine: (calico-796478) Building disk image from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1210 01:30:25.139960  142195 main.go:141] libmachine: (calico-796478) Downloading /home/jenkins/minikube-integration/20062-79135/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1210 01:30:25.399650  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.399522  142218 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478/id_rsa...
	I1210 01:30:25.478477  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.478347  142218 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478/calico-796478.rawdisk...
	I1210 01:30:25.478508  142195 main.go:141] libmachine: (calico-796478) DBG | Writing magic tar header
	I1210 01:30:25.478522  142195 main.go:141] libmachine: (calico-796478) DBG | Writing SSH key tar header
	I1210 01:30:25.478533  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:25.478465  142218 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478 ...
	I1210 01:30:25.478673  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478
	I1210 01:30:25.478709  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478 (perms=drwx------)
	I1210 01:30:25.478721  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube/machines
	I1210 01:30:25.478734  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:30:25.478746  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20062-79135
	I1210 01:30:25.478759  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1210 01:30:25.478769  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home/jenkins
	I1210 01:30:25.478780  142195 main.go:141] libmachine: (calico-796478) DBG | Checking permissions on dir: /home
	I1210 01:30:25.478794  142195 main.go:141] libmachine: (calico-796478) DBG | Skipping /home - not owner
	I1210 01:30:25.478804  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube/machines (perms=drwxr-xr-x)
	I1210 01:30:25.478815  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135/.minikube (perms=drwxr-xr-x)
	I1210 01:30:25.478824  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins/minikube-integration/20062-79135 (perms=drwxrwxr-x)
	I1210 01:30:25.478836  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1210 01:30:25.478844  142195 main.go:141] libmachine: (calico-796478) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1210 01:30:25.478855  142195 main.go:141] libmachine: (calico-796478) Creating domain...
	I1210 01:30:25.480056  142195 main.go:141] libmachine: (calico-796478) define libvirt domain using xml: 
	I1210 01:30:25.480091  142195 main.go:141] libmachine: (calico-796478) <domain type='kvm'>
	I1210 01:30:25.480103  142195 main.go:141] libmachine: (calico-796478)   <name>calico-796478</name>
	I1210 01:30:25.480113  142195 main.go:141] libmachine: (calico-796478)   <memory unit='MiB'>3072</memory>
	I1210 01:30:25.480127  142195 main.go:141] libmachine: (calico-796478)   <vcpu>2</vcpu>
	I1210 01:30:25.480138  142195 main.go:141] libmachine: (calico-796478)   <features>
	I1210 01:30:25.480152  142195 main.go:141] libmachine: (calico-796478)     <acpi/>
	I1210 01:30:25.480162  142195 main.go:141] libmachine: (calico-796478)     <apic/>
	I1210 01:30:25.480172  142195 main.go:141] libmachine: (calico-796478)     <pae/>
	I1210 01:30:25.480205  142195 main.go:141] libmachine: (calico-796478)     
	I1210 01:30:25.480220  142195 main.go:141] libmachine: (calico-796478)   </features>
	I1210 01:30:25.480230  142195 main.go:141] libmachine: (calico-796478)   <cpu mode='host-passthrough'>
	I1210 01:30:25.480240  142195 main.go:141] libmachine: (calico-796478)   
	I1210 01:30:25.480251  142195 main.go:141] libmachine: (calico-796478)   </cpu>
	I1210 01:30:25.480261  142195 main.go:141] libmachine: (calico-796478)   <os>
	I1210 01:30:25.480278  142195 main.go:141] libmachine: (calico-796478)     <type>hvm</type>
	I1210 01:30:25.480292  142195 main.go:141] libmachine: (calico-796478)     <boot dev='cdrom'/>
	I1210 01:30:25.480304  142195 main.go:141] libmachine: (calico-796478)     <boot dev='hd'/>
	I1210 01:30:25.480314  142195 main.go:141] libmachine: (calico-796478)     <bootmenu enable='no'/>
	I1210 01:30:25.480325  142195 main.go:141] libmachine: (calico-796478)   </os>
	I1210 01:30:25.480336  142195 main.go:141] libmachine: (calico-796478)   <devices>
	I1210 01:30:25.480349  142195 main.go:141] libmachine: (calico-796478)     <disk type='file' device='cdrom'>
	I1210 01:30:25.480364  142195 main.go:141] libmachine: (calico-796478)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478/boot2docker.iso'/>
	I1210 01:30:25.480377  142195 main.go:141] libmachine: (calico-796478)       <target dev='hdc' bus='scsi'/>
	I1210 01:30:25.480391  142195 main.go:141] libmachine: (calico-796478)       <readonly/>
	I1210 01:30:25.480402  142195 main.go:141] libmachine: (calico-796478)     </disk>
	I1210 01:30:25.480461  142195 main.go:141] libmachine: (calico-796478)     <disk type='file' device='disk'>
	I1210 01:30:25.480482  142195 main.go:141] libmachine: (calico-796478)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1210 01:30:25.480501  142195 main.go:141] libmachine: (calico-796478)       <source file='/home/jenkins/minikube-integration/20062-79135/.minikube/machines/calico-796478/calico-796478.rawdisk'/>
	I1210 01:30:25.480514  142195 main.go:141] libmachine: (calico-796478)       <target dev='hda' bus='virtio'/>
	I1210 01:30:25.480536  142195 main.go:141] libmachine: (calico-796478)     </disk>
	I1210 01:30:25.480553  142195 main.go:141] libmachine: (calico-796478)     <interface type='network'>
	I1210 01:30:25.480583  142195 main.go:141] libmachine: (calico-796478)       <source network='mk-calico-796478'/>
	I1210 01:30:25.480596  142195 main.go:141] libmachine: (calico-796478)       <model type='virtio'/>
	I1210 01:30:25.480605  142195 main.go:141] libmachine: (calico-796478)     </interface>
	I1210 01:30:25.480614  142195 main.go:141] libmachine: (calico-796478)     <interface type='network'>
	I1210 01:30:25.480623  142195 main.go:141] libmachine: (calico-796478)       <source network='default'/>
	I1210 01:30:25.480655  142195 main.go:141] libmachine: (calico-796478)       <model type='virtio'/>
	I1210 01:30:25.480668  142195 main.go:141] libmachine: (calico-796478)     </interface>
	I1210 01:30:25.480677  142195 main.go:141] libmachine: (calico-796478)     <serial type='pty'>
	I1210 01:30:25.480690  142195 main.go:141] libmachine: (calico-796478)       <target port='0'/>
	I1210 01:30:25.480700  142195 main.go:141] libmachine: (calico-796478)     </serial>
	I1210 01:30:25.480723  142195 main.go:141] libmachine: (calico-796478)     <console type='pty'>
	I1210 01:30:25.480754  142195 main.go:141] libmachine: (calico-796478)       <target type='serial' port='0'/>
	I1210 01:30:25.480768  142195 main.go:141] libmachine: (calico-796478)     </console>
	I1210 01:30:25.480780  142195 main.go:141] libmachine: (calico-796478)     <rng model='virtio'>
	I1210 01:30:25.480792  142195 main.go:141] libmachine: (calico-796478)       <backend model='random'>/dev/random</backend>
	I1210 01:30:25.480813  142195 main.go:141] libmachine: (calico-796478)     </rng>
	I1210 01:30:25.480826  142195 main.go:141] libmachine: (calico-796478)     
	I1210 01:30:25.480837  142195 main.go:141] libmachine: (calico-796478)     
	I1210 01:30:25.480846  142195 main.go:141] libmachine: (calico-796478)   </devices>
	I1210 01:30:25.480858  142195 main.go:141] libmachine: (calico-796478) </domain>
	I1210 01:30:25.480870  142195 main.go:141] libmachine: (calico-796478) 
	I1210 01:30:25.484935  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:6e:9d:1e in network default
	I1210 01:30:25.485551  142195 main.go:141] libmachine: (calico-796478) Ensuring networks are active...
	I1210 01:30:25.485569  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:25.486269  142195 main.go:141] libmachine: (calico-796478) Ensuring network default is active
	I1210 01:30:25.486607  142195 main.go:141] libmachine: (calico-796478) Ensuring network mk-calico-796478 is active
	I1210 01:30:25.487193  142195 main.go:141] libmachine: (calico-796478) Getting domain xml...
	I1210 01:30:25.487970  142195 main.go:141] libmachine: (calico-796478) Creating domain...
	I1210 01:30:26.731662  142195 main.go:141] libmachine: (calico-796478) Waiting to get IP...
	I1210 01:30:26.733508  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:26.733959  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:26.734042  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:26.733941  142218 retry.go:31] will retry after 259.579016ms: waiting for machine to come up
	I1210 01:30:26.995395  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:26.995876  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:26.995907  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:26.995822  142218 retry.go:31] will retry after 326.157609ms: waiting for machine to come up
	I1210 01:30:27.323585  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:27.324154  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:27.324183  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:27.324113  142218 retry.go:31] will retry after 336.299571ms: waiting for machine to come up
	I1210 01:30:27.661608  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:27.662116  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:27.662145  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:27.662082  142218 retry.go:31] will retry after 470.520349ms: waiting for machine to come up
	I1210 01:30:28.135118  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:28.135377  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:28.135430  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:28.135340  142218 retry.go:31] will retry after 551.550644ms: waiting for machine to come up
	I1210 01:30:28.688946  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:28.689457  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:28.689487  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:28.689399  142218 retry.go:31] will retry after 653.272335ms: waiting for machine to come up
	I1210 01:30:29.344976  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:29.345562  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:29.345587  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:29.345514  142218 retry.go:31] will retry after 1.031411867s: waiting for machine to come up
	I1210 01:30:30.378860  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:30.379360  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:30.379388  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:30.379314  142218 retry.go:31] will retry after 1.296170914s: waiting for machine to come up
	I1210 01:30:31.677543  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:31.677999  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:31.678031  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:31.677961  142218 retry.go:31] will retry after 1.31342287s: waiting for machine to come up
	I1210 01:30:32.993376  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:32.993866  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:32.993887  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:32.993818  142218 retry.go:31] will retry after 2.144104824s: waiting for machine to come up
	I1210 01:30:35.140080  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:35.140573  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:35.140606  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:35.140548  142218 retry.go:31] will retry after 1.984197354s: waiting for machine to come up
	I1210 01:30:37.129675  142195 main.go:141] libmachine: (calico-796478) DBG | domain calico-796478 has defined MAC address 52:54:00:cb:19:21 in network mk-calico-796478
	I1210 01:30:37.129708  142195 main.go:141] libmachine: (calico-796478) DBG | unable to find current IP address of domain calico-796478 in network mk-calico-796478
	I1210 01:30:37.129725  142195 main.go:141] libmachine: (calico-796478) DBG | I1210 01:30:37.128181  142218 retry.go:31] will retry after 2.833003018s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.265066090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794244265027663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ad2fa60-2863-4fd8-845b-37bd1ebe5ec5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.265594837Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a0cf301-c0fd-4fa6-ada5-9cc1381e9b7b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.265689860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a0cf301-c0fd-4fa6-ada5-9cc1381e9b7b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.266064894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a0cf301-c0fd-4fa6-ada5-9cc1381e9b7b name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.309433309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e750cfd0-3e5b-41ec-b421-147f6eadc478 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.309522188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e750cfd0-3e5b-41ec-b421-147f6eadc478 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.310515400Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=145f1871-ba33-47bf-b05b-3065ceeeeb4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.310971789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794244310948987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=145f1871-ba33-47bf-b05b-3065ceeeeb4e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.311403448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50211e43-6ea2-4161-b806-852b552e4654 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.311469569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50211e43-6ea2-4161-b806-852b552e4654 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.311677098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50211e43-6ea2-4161-b806-852b552e4654 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.351744340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7b5da31-6b34-48af-a7f4-6ea0992453ae name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.351911013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7b5da31-6b34-48af-a7f4-6ea0992453ae name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.353184201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edea31cc-449b-467e-98d7-7218c47a94f8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.353667447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794244353639828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edea31cc-449b-467e-98d7-7218c47a94f8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.354240253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60260379-a85a-4fac-819e-12130ba20f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.354293770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60260379-a85a-4fac-819e-12130ba20f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.354499097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60260379-a85a-4fac-819e-12130ba20f9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.407866334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b86602ba-3fb8-4b67-95a5-11367281bd66 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.407937191Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b86602ba-3fb8-4b67-95a5-11367281bd66 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.409840399Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35253f3f-8661-4919-a034-2de0811ef15d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.410569100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794244410535892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35253f3f-8661-4919-a034-2de0811ef15d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.411376915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2ca067d-0bb9-477c-b596-d2030e3979df name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.411432328Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2ca067d-0bb9-477c-b596-d2030e3979df name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:30:44 default-k8s-diff-port-901295 crio[718]: time="2024-12-10 01:30:44.411635064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed,PodSandboxId:d77aa12393140c588f18d78e635b3238fa16dda524d42fa9d828f5bff7df347a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733793236880136802,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06a31677-c5d7-4380-80d3-ec80b787f570,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c,PodSandboxId:05bf9be9323da4b23832de3954969460c22ee4c80e104a595aff76fafd9f9ffc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236464954507,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-wr22x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51e6d58d-7a5a-4739-94de-c53a8c8247ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80,PodSandboxId:1c493db8f92176cd7305e2526fd6832d03a259d38f5491f3ec73ceb7669183a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733793236326957951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4snjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ee9574b0-7c13-4fd0-b268-47bef0687b7c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388,PodSandboxId:0aa07862419dfe0021db43b42aae875da5abd84a21bce96caec43b3bf9af9611,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733793235653961851,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mcrmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffc0f612-5484-46b4-9515-41e0a981287f,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da,PodSandboxId:2311235619d5dc1fc5480e3b7b860c83e828acb82dcf3e09c115588bf7f425d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:173379322487969794
9,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea8ae8fc71b7d3a1e6a588ea52551088,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f,PodSandboxId:5b8234923abb6fcc6ec3f105e60233f9ba0c304c166a3c7cebe4e9f9de14ba49,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:17337932249
18392555,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1601ef4c33ab24cae77f791aa4dae7ba,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88,PodSandboxId:67bd39b722fbf512b20d685b7b41277541aedaad6037d72aa99e33f8c1d9a817,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:17337
93224877343509,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60cb8ec79d285fd901584e8d98fd0fd8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951,PodSandboxId:f046688426640edd97a62aa36b96c10bb4bd5d299f6972fc61da215c858bdfd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733793224818151233,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79,PodSandboxId:cabe55eda9171e4354e5bdbfaab8b448971681d8faafe5e81525f08084d9d69d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733792944791162309,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-901295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4df4d42b9990234c537f747b7c23c1,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2ca067d-0bb9-477c-b596-d2030e3979df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52a45c139cf7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   d77aa12393140       storage-provisioner
	f6372d6d257a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   05bf9be9323da       coredns-7c65d6cfc9-wr22x
	5a52f58a219a7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   1c493db8f9217       coredns-7c65d6cfc9-4snjr
	c886354b05829       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   16 minutes ago      Running             kube-proxy                0                   0aa07862419df       kube-proxy-mcrmk
	33af2665f03e9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   5b8234923abb6       kube-controller-manager-default-k8s-diff-port-901295
	a5f83dfbd84c1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   2311235619d5d       kube-scheduler-default-k8s-diff-port-901295
	e4a420a8c6b03       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   67bd39b722fbf       etcd-default-k8s-diff-port-901295
	8d394dc046928       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   f046688426640       kube-apiserver-default-k8s-diff-port-901295
	9d2e60f0d4eb9       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            1                   cabe55eda9171       kube-apiserver-default-k8s-diff-port-901295
	
	
	==> coredns [5a52f58a219a7b7c9a685b2223778247841c8895a0105e7248189c429be1df80] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f6372d6d257a7bab18d3d0e3674bc099aeb54a633e485d8c2518a1b0a750266c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-901295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-901295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=default-k8s-diff-port-901295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 01:13:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-901295
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 01:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 01:29:17 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 01:29:17 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 01:29:17 +0000   Tue, 10 Dec 2024 01:13:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 01:29:17 +0000   Tue, 10 Dec 2024 01:13:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.193
	  Hostname:    default-k8s-diff-port-901295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ca8ebec5ac643cca4f6efe51370db7b
	  System UUID:                2ca8ebec-5ac6-43cc-a4f6-efe51370db7b
	  Boot ID:                    05788dbc-2bfa-4ea0-bfa7-aafcafe02894
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4snjr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-wr22x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-901295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-901295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-901295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-mcrmk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-901295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-rlg4g                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-901295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-901295 event: Registered Node default-k8s-diff-port-901295 in Controller
	
	
	==> dmesg <==
	[  +0.056298] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.062384] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.026226] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.441267] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.198852] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.057824] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061100] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.174382] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.136630] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.276898] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[Dec10 01:09] systemd-fstab-generator[800]: Ignoring "noauto" option for root device
	[  +1.851621] systemd-fstab-generator[920]: Ignoring "noauto" option for root device
	[  +0.066667] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.507222] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.901427] kauditd_printk_skb: 85 callbacks suppressed
	[Dec10 01:13] systemd-fstab-generator[2625]: Ignoring "noauto" option for root device
	[  +0.061004] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.982050] systemd-fstab-generator[2945]: Ignoring "noauto" option for root device
	[  +0.079152] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.311877] systemd-fstab-generator[3055]: Ignoring "noauto" option for root device
	[  +0.095140] kauditd_printk_skb: 12 callbacks suppressed
	[Dec10 01:14] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [e4a420a8c6b0391831013fbbe1a0a122bba6ea40169623e539dedd548363dd88] <==
	{"level":"info","ts":"2024-12-10T01:13:46.198725Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T01:13:46.198879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-10T01:13:46.199528Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.193:2379"}
	{"level":"info","ts":"2024-12-10T01:23:46.222572Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-12-10T01:23:46.231054Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"8.11591ms","hash":165793504,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-10T01:23:46.231118Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":165793504,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2024-12-10T01:28:46.229665Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":964}
	{"level":"info","ts":"2024-12-10T01:28:46.233551Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":964,"took":"3.119679ms","hash":646575440,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-10T01:28:46.233626Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":646575440,"revision":964,"compact-revision":721}
	{"level":"warn","ts":"2024-12-10T01:29:24.464350Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.267728ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10517756355859316985 > lease_revoke:<id:11f693ae211a0098>","response":"size:29"}
	{"level":"warn","ts":"2024-12-10T01:29:48.357238Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.940335ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T01:29:48.357852Z","caller":"traceutil/trace.go:171","msg":"trace[1449584356] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1259; }","duration":"143.536545ms","start":"2024-12-10T01:29:48.214275Z","end":"2024-12-10T01:29:48.357811Z","steps":["trace[1449584356] 'range keys from in-memory index tree'  (duration: 142.882395ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T01:29:48.771430Z","caller":"traceutil/trace.go:171","msg":"trace[1345332907] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"387.735532ms","start":"2024-12-10T01:29:48.383670Z","end":"2024-12-10T01:29:48.771406Z","steps":["trace[1345332907] 'process raft request'  (duration: 387.605058ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T01:29:48.772178Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-10T01:29:48.383653Z","time spent":"387.906341ms","remote":"127.0.0.1:50590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-gy7g4i6z6ko7j7xzikcslb3p2y\" mod_revision:1252 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-gy7g4i6z6ko7j7xzikcslb3p2y\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-gy7g4i6z6ko7j7xzikcslb3p2y\" > >"}
	{"level":"warn","ts":"2024-12-10T01:29:49.453640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.973121ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T01:29:49.453912Z","caller":"traceutil/trace.go:171","msg":"trace[1828415899] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1261; }","duration":"239.253215ms","start":"2024-12-10T01:29:49.214643Z","end":"2024-12-10T01:29:49.453896Z","steps":["trace[1828415899] 'range keys from in-memory index tree'  (duration: 238.963025ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T01:29:49.863913Z","caller":"traceutil/trace.go:171","msg":"trace[1087415501] transaction","detail":"{read_only:false; response_revision:1262; number_of_response:1; }","duration":"137.14715ms","start":"2024-12-10T01:29:49.726747Z","end":"2024-12-10T01:29:49.863894Z","steps":["trace[1087415501] 'process raft request'  (duration: 136.956898ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T01:30:12.144698Z","caller":"traceutil/trace.go:171","msg":"trace[1086798453] linearizableReadLoop","detail":"{readStateIndex:1492; appliedIndex:1491; }","duration":"125.702302ms","start":"2024-12-10T01:30:12.018978Z","end":"2024-12-10T01:30:12.144680Z","steps":["trace[1086798453] 'read index received'  (duration: 125.520134ms)","trace[1086798453] 'applied index is now lower than readState.Index'  (duration: 181.497µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-10T01:30:12.144928Z","caller":"traceutil/trace.go:171","msg":"trace[677689807] transaction","detail":"{read_only:false; response_revision:1280; number_of_response:1; }","duration":"171.955822ms","start":"2024-12-10T01:30:11.972958Z","end":"2024-12-10T01:30:12.144914Z","steps":["trace[677689807] 'process raft request'  (duration: 171.578932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T01:30:12.144940Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.942301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T01:30:12.145909Z","caller":"traceutil/trace.go:171","msg":"trace[880202336] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1280; }","duration":"126.918055ms","start":"2024-12-10T01:30:12.018974Z","end":"2024-12-10T01:30:12.145892Z","steps":["trace[880202336] 'agreement among raft nodes before linearized reading'  (duration: 125.906008ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-10T01:30:24.426400Z","caller":"traceutil/trace.go:171","msg":"trace[1251326511] linearizableReadLoop","detail":"{readStateIndex:1504; appliedIndex:1503; }","duration":"212.043694ms","start":"2024-12-10T01:30:24.214338Z","end":"2024-12-10T01:30:24.426382Z","steps":["trace[1251326511] 'read index received'  (duration: 211.903537ms)","trace[1251326511] 'applied index is now lower than readState.Index'  (duration: 139.52µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-10T01:30:24.426604Z","caller":"traceutil/trace.go:171","msg":"trace[738172035] transaction","detail":"{read_only:false; response_revision:1290; number_of_response:1; }","duration":"224.630046ms","start":"2024-12-10T01:30:24.201961Z","end":"2024-12-10T01:30:24.426591Z","steps":["trace[738172035] 'process raft request'  (duration: 224.305822ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-10T01:30:24.426904Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.526261ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-10T01:30:24.427088Z","caller":"traceutil/trace.go:171","msg":"trace[2122444538] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1290; }","duration":"212.716316ms","start":"2024-12-10T01:30:24.214334Z","end":"2024-12-10T01:30:24.427050Z","steps":["trace[2122444538] 'agreement among raft nodes before linearized reading'  (duration: 212.504542ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:30:44 up 22 min,  0 users,  load average: 0.05, 0.13, 0.11
	Linux default-k8s-diff-port-901295 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8d394dc046928540d723c13d1536ec41148a447216cd63353ea6189cc6a73951] <==
	I1210 01:26:48.515734       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:26:48.515865       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:28:47.512998       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:47.513447       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 01:28:48.515467       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:48.515669       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1210 01:28:48.515827       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:28:48.515900       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1210 01:28:48.516882       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:28:48.516950       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1210 01:29:48.517977       1 handler_proxy.go:99] no RequestInfo found in the context
	W1210 01:29:48.517997       1 handler_proxy.go:99] no RequestInfo found in the context
	E1210 01:29:48.518298       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1210 01:29:48.518429       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1210 01:29:48.520249       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1210 01:29:48.520296       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [9d2e60f0d4eb9452de688e92d066a121751141264f0919d60bd7af5e3836ae79] <==
	W1210 01:13:40.812227       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.831876       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.851857       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.855226       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.858575       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.859868       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.872920       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.910741       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.925317       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.943956       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.946307       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.949675       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.952930       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:40.955307       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.030380       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.064271       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.084186       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.088560       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.146398       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.196502       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.200994       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.219903       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.344969       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.365856       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1210 01:13:41.522854       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [33af2665f03e95742ddd00f7788a2150f61bc045cd8ca9eb63be8b85cb41726f] <==
	E1210 01:25:24.622604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:25.074291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:25:54.628688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:25:55.081195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:26:24.634047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:25.089943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:26:54.642201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:26:55.097610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:24.648599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:25.105595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:27:54.656374       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:27:55.113275       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:28:24.663158       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:28:25.120987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:28:54.672109       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:28:55.130921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:29:17.637597       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-901295"
	E1210 01:29:24.679696       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:29:25.141336       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1210 01:29:54.687382       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:29:55.148560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1210 01:30:08.915892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="305.715µs"
	I1210 01:30:20.912316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="160.175µs"
	E1210 01:30:24.694601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1210 01:30:25.156816       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c886354b0582999efa023713e31afbf5cc13ad3657fa2eac0228f85b5f645388] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1210 01:13:56.178804       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1210 01:13:56.198101       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.193"]
	E1210 01:13:56.198189       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1210 01:13:56.306208       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1210 01:13:56.306292       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1210 01:13:56.306346       1 server_linux.go:169] "Using iptables Proxier"
	I1210 01:13:56.333033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1210 01:13:56.333475       1 server.go:483] "Version info" version="v1.31.2"
	I1210 01:13:56.333497       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 01:13:56.370970       1 config.go:105] "Starting endpoint slice config controller"
	I1210 01:13:56.370994       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1210 01:13:56.371692       1 config.go:328] "Starting node config controller"
	I1210 01:13:56.371706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1210 01:13:56.379046       1 config.go:199] "Starting service config controller"
	I1210 01:13:56.379207       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1210 01:13:56.471354       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1210 01:13:56.480063       1 shared_informer.go:320] Caches are synced for service config
	I1210 01:13:56.489250       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a5f83dfbd84c1e473b83ee7d3618062bfdff726c45070bc6520caf7b97c3a3da] <==
	W1210 01:13:47.551437       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:47.551972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:47.551990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:47.552024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:47.551587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:47.552041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.393042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 01:13:48.393184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.543009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 01:13:48.543093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.634559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1210 01:13:48.634632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.637892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1210 01:13:48.638043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.656045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:48.656122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.706062       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 01:13:48.706109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1210 01:13:48.736707       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1210 01:13:48.736753       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1210 01:13:48.762534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1210 01:13:48.762650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1210 01:13:50.538326       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 10 01:29:49 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:49.918577    2952 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 10 01:29:49 default-k8s-diff-port-901295 kubelet[2952]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 10 01:29:49 default-k8s-diff-port-901295 kubelet[2952]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 10 01:29:49 default-k8s-diff-port-901295 kubelet[2952]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 10 01:29:49 default-k8s-diff-port-901295 kubelet[2952]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 10 01:29:50 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:50.170718    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794190170497665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:29:50 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:50.170752    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794190170497665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:29:55 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:55.913091    2952 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 10 01:29:55 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:55.913163    2952 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 10 01:29:55 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:55.913358    2952 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99kjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-rlg4g_kube-system(9aae955e-136b-4dbb-a5a5-f7490309bf4e): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 10 01:29:55 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:29:55.914682    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:30:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:00.172051    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794200171752607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:00 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:00.172080    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794200171752607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:08 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:08.897502    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:30:10 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:10.173813    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794210173467773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:10 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:10.173860    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794210173467773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:20 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:20.174925    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794220174654499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:20 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:20.175211    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794220174654499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:20 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:20.898127    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:30:30 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:30.176736    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794230176360783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:30 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:30.176909    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794230176360783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:31 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:31.898943    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	Dec 10 01:30:40 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:40.178966    2952 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794240178326880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:40 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:40.179066    2952 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794240178326880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 10 01:30:43 default-k8s-diff-port-901295 kubelet[2952]: E1210 01:30:43.901224    2952 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-rlg4g" podUID="9aae955e-136b-4dbb-a5a5-f7490309bf4e"
	
	
	==> storage-provisioner [52a45c139cf7b76bcdc156fe228ab6016f0aef5ca7255d7bb176f4a6e1e807ed] <==
	I1210 01:13:56.992459       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 01:13:57.016061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 01:13:57.016204       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 01:13:57.066399       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 01:13:57.073848       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb814050-68d7-4c7a-9b72-ae74e9338a4f", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa became leader
	I1210 01:13:57.075043       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa!
	I1210 01:13:57.176002       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-901295_bbbdc540-9b50-4a68-bfcb-2088714f7baa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-rlg4g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g
E1210 01:30:45.706580   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g: exit status 1 (73.834303ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-rlg4g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-901295 describe pod metrics-server-6867b74b74-rlg4g: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (455.68s)
E1210 01:32:42.094401   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.11:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.11:8443: connect: connection refused
[the identical warning above was logged 45 more times while the pod list was retried against the unreachable apiserver at 192.168.61.11:8443, until the 9m0s wait expired]
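The retry loop that produces these warnings is not shown in the log; a rough shell equivalent of the wait (illustrative only — the profile context and label selector are taken from the log above, the 10s interval is an assumption, and this is not the harness's actual code) is:

	# keep retrying until a dashboard pod reports Ready (fails fast while the apiserver is down)
	until kubectl --context old-k8s-version-094470 -n kubernetes-dashboard \
	      wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=30s; do
	    sleep 10
	done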
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (234.692202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-094470" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-094470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-094470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.854µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-094470 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
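The describe call above failed only because the test's context had already expired; once the apiserver on this profile is reachable again, the image check can be approximated by hand (a sketch — the jsonpath query is an illustrative way to surface the deployed image, not the test's own code):

	# confirm the profile's apiserver is back, then print the scraper image
	out/minikube-linux-amd64 status -p old-k8s-version-094470
	kubectl --context old-k8s-version-094470 -n kubernetes-dashboard \
	    get deploy dashboard-metrics-scraper \
	    -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'

In this run the printed image was expected to contain registry.k8s.io/echoserver:1.4, matching the --images=MetricsScraper override passed to "addons enable dashboard" earlier (see the Audit table below).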
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (225.65999ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-094470 logs -n 25: (1.468486174s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p cert-options-086522                                 | cert-options-086522          | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-481624                           | kubernetes-upgrade-481624    | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 00:58 UTC |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 00:58 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-290541                              | cert-expiration-290541       | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	| delete  | -p                                                     | disable-driver-mounts-371895 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | disable-driver-mounts-371895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:02 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-584179             | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-274758            | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC | 10 Dec 24 01:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-901295  | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC | 10 Dec 24 01:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-094470        | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-584179                  | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-274758                 | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-584179                                   | no-preload-584179            | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-274758                                  | embed-certs-274758           | jenkins | v1.34.0 | 10 Dec 24 01:03 UTC | 10 Dec 24 01:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-901295       | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-094470             | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-094470                              | old-k8s-version-094470       | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-901295 | jenkins | v1.34.0 | 10 Dec 24 01:04 UTC | 10 Dec 24 01:14 UTC |
	|         | default-k8s-diff-port-901295                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/10 01:04:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1210 01:04:42.604554  133282 out.go:345] Setting OutFile to fd 1 ...
	I1210 01:04:42.604645  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604652  133282 out.go:358] Setting ErrFile to fd 2...
	I1210 01:04:42.604657  133282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 01:04:42.604818  133282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 01:04:42.605325  133282 out.go:352] Setting JSON to false
	I1210 01:04:42.606230  133282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10034,"bootTime":1733782649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 01:04:42.606360  133282 start.go:139] virtualization: kvm guest
	I1210 01:04:42.608505  133282 out.go:177] * [default-k8s-diff-port-901295] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 01:04:42.609651  133282 notify.go:220] Checking for updates...
	I1210 01:04:42.609661  133282 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 01:04:42.610866  133282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 01:04:42.611986  133282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:04:42.613055  133282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 01:04:42.614094  133282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 01:04:42.615160  133282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 01:04:42.616546  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:04:42.616942  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.617000  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.631861  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I1210 01:04:42.632399  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.632966  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.632988  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.633389  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.633558  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.633822  133282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 01:04:42.634105  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:04:42.634139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:04:42.648371  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38389
	I1210 01:04:42.648775  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:04:42.649217  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:04:42.649238  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:04:42.649580  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:04:42.649752  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:04:42.680926  133282 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 01:04:42.682339  133282 start.go:297] selected driver: kvm2
	I1210 01:04:42.682365  133282 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.682487  133282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 01:04:42.683148  133282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.683220  133282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1210 01:04:42.697586  133282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1210 01:04:42.697938  133282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:04:42.697970  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:04:42.698011  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:04:42.698042  133282 start.go:340] cluster config:
	{Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:04:42.698139  133282 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1210 01:04:42.699685  133282 out.go:177] * Starting "default-k8s-diff-port-901295" primary control-plane node in "default-k8s-diff-port-901295" cluster
	I1210 01:04:39.721352  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:04:39.721383  133241 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:39.721392  133241 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:39.721455  133241 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:39.721464  133241 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1210 01:04:39.721545  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:04:39.721707  133241 start.go:360] acquireMachinesLock for old-k8s-version-094470: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:44.574793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:42.700760  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:04:42.700790  133282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1210 01:04:42.700799  133282 cache.go:56] Caching tarball of preloaded images
	I1210 01:04:42.700867  133282 preload.go:172] Found /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1210 01:04:42.700878  133282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1210 01:04:42.700976  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:04:42.701136  133282 start.go:360] acquireMachinesLock for default-k8s-diff-port-901295: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:04:50.654849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:53.726828  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:04:59.806818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:02.878819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:08.958855  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:12.030796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:18.110838  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:21.182849  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:27.262801  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:30.334793  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:36.414830  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:39.486794  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:45.566825  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:48.639043  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:54.718789  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:05:57.790796  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:03.870824  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:06.942805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:13.023037  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:16.094961  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:22.174798  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:25.246892  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:31.326818  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:34.398846  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:40.478809  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:43.550800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:49.630777  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:52.702808  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:06:58.783007  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:01.854776  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:07.934835  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:11.006837  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:17.086805  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:20.158819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:26.238836  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:29.311060  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:35.390827  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:38.462976  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:44.542806  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:47.614800  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:53.694819  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:56.766790  132605 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.169:22: connect: no route to host
	I1210 01:07:59.770632  132693 start.go:364] duration metric: took 4m32.843409632s to acquireMachinesLock for "embed-certs-274758"
	I1210 01:07:59.770698  132693 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:07:59.770705  132693 fix.go:54] fixHost starting: 
	I1210 01:07:59.771174  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:07:59.771226  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:07:59.787289  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45723
	I1210 01:07:59.787787  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:07:59.788234  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:07:59.788258  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:07:59.788645  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:07:59.788824  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:07:59.788958  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:07:59.790595  132693 fix.go:112] recreateIfNeeded on embed-certs-274758: state=Stopped err=<nil>
	I1210 01:07:59.790631  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	W1210 01:07:59.790790  132693 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:07:59.792515  132693 out.go:177] * Restarting existing kvm2 VM for "embed-certs-274758" ...
	I1210 01:07:59.793607  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Start
	I1210 01:07:59.793771  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring networks are active...
	I1210 01:07:59.794532  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network default is active
	I1210 01:07:59.794864  132693 main.go:141] libmachine: (embed-certs-274758) Ensuring network mk-embed-certs-274758 is active
	I1210 01:07:59.795317  132693 main.go:141] libmachine: (embed-certs-274758) Getting domain xml...
	I1210 01:07:59.796099  132693 main.go:141] libmachine: (embed-certs-274758) Creating domain...
	I1210 01:08:00.982632  132693 main.go:141] libmachine: (embed-certs-274758) Waiting to get IP...
	I1210 01:08:00.983591  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:00.984037  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:00.984077  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:00.984002  133990 retry.go:31] will retry after 285.753383ms: waiting for machine to come up
	I1210 01:08:01.272035  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.272490  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.272514  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.272423  133990 retry.go:31] will retry after 309.245833ms: waiting for machine to come up
	I1210 01:08:01.582873  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:01.583336  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:01.583382  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:01.583288  133990 retry.go:31] will retry after 451.016986ms: waiting for machine to come up
	I1210 01:07:59.768336  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:07:59.768370  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768666  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:07:59.768702  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:07:59.768894  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:07:59.770491  132605 machine.go:96] duration metric: took 4m37.429107505s to provisionDockerMachine
	I1210 01:07:59.770535  132605 fix.go:56] duration metric: took 4m37.448303416s for fixHost
	I1210 01:07:59.770542  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 4m37.448340626s
	W1210 01:07:59.770589  132605 start.go:714] error starting host: provision: host is not running
	W1210 01:07:59.770743  132605 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1210 01:07:59.770759  132605 start.go:729] Will try again in 5 seconds ...
	I1210 01:08:02.035970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.036421  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.036443  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.036382  133990 retry.go:31] will retry after 408.436756ms: waiting for machine to come up
	I1210 01:08:02.445970  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:02.446515  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:02.446550  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:02.446445  133990 retry.go:31] will retry after 612.819219ms: waiting for machine to come up
	I1210 01:08:03.061377  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.061850  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.061879  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.061795  133990 retry.go:31] will retry after 867.345457ms: waiting for machine to come up
	I1210 01:08:03.930866  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:03.931316  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:03.931340  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:03.931259  133990 retry.go:31] will retry after 758.429736ms: waiting for machine to come up
	I1210 01:08:04.691061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:04.691480  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:04.691511  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:04.691430  133990 retry.go:31] will retry after 1.278419765s: waiting for machine to come up
	I1210 01:08:05.972206  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:05.972645  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:05.972677  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:05.972596  133990 retry.go:31] will retry after 1.726404508s: waiting for machine to come up
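Note: the retry lines above (retry.go:31) poll libvirt for the domain's DHCP lease with growing, jittered delays until an address appears. A minimal, self-contained Go sketch of that wait-with-backoff pattern follows; lookupIP is a hypothetical placeholder, not minikube's lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for "ask libvirt for the domain's
// current DHCP lease"; it fails until the guest has obtained an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

// waitForIP polls lookupIP with a growing, jittered delay, similar in spirit
// to the retry.go lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // back off for the next attempt
	}
}

func main() {
	if ip, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}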
	I1210 01:08:04.770968  132605 start.go:360] acquireMachinesLock for no-preload-584179: {Name:mk02db33dbfcec8136bb026deeb59bcf33be33d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1210 01:08:07.700170  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:07.700593  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:07.700615  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:07.700544  133990 retry.go:31] will retry after 2.286681333s: waiting for machine to come up
	I1210 01:08:09.989072  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:09.989424  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:09.989447  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:09.989383  133990 retry.go:31] will retry after 2.723565477s: waiting for machine to come up
	I1210 01:08:12.716204  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:12.716656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | unable to find current IP address of domain embed-certs-274758 in network mk-embed-certs-274758
	I1210 01:08:12.716680  132693 main.go:141] libmachine: (embed-certs-274758) DBG | I1210 01:08:12.716618  133990 retry.go:31] will retry after 3.619683155s: waiting for machine to come up
	I1210 01:08:16.338854  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339271  132693 main.go:141] libmachine: (embed-certs-274758) Found IP for machine: 192.168.72.76
	I1210 01:08:16.339301  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has current primary IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.339306  132693 main.go:141] libmachine: (embed-certs-274758) Reserving static IP address...
	I1210 01:08:16.339656  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.339683  132693 main.go:141] libmachine: (embed-certs-274758) DBG | skip adding static IP to network mk-embed-certs-274758 - found existing host DHCP lease matching {name: "embed-certs-274758", mac: "52:54:00:d3:3c:b1", ip: "192.168.72.76"}
	I1210 01:08:16.339695  132693 main.go:141] libmachine: (embed-certs-274758) Reserved static IP address: 192.168.72.76
	I1210 01:08:16.339703  132693 main.go:141] libmachine: (embed-certs-274758) Waiting for SSH to be available...
	I1210 01:08:16.339715  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Getting to WaitForSSH function...
	I1210 01:08:16.341531  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341776  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.341804  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.341963  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH client type: external
	I1210 01:08:16.341995  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa (-rw-------)
	I1210 01:08:16.342030  132693 main.go:141] libmachine: (embed-certs-274758) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:16.342047  132693 main.go:141] libmachine: (embed-certs-274758) DBG | About to run SSH command:
	I1210 01:08:16.342061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | exit 0
	I1210 01:08:16.465930  132693 main.go:141] libmachine: (embed-certs-274758) DBG | SSH cmd err, output: <nil>: 
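Note: the WaitForSSH step above shells out to the system ssh binary with a fixed set of non-interactive options and runs `exit 0` until it succeeds. A rough Go equivalent using os/exec is sketched below; the host, user, port and key path are copied from the log, and probeSSH is an illustrative name, not a minikube API.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs "exit 0" on the target over the system ssh binary with the
// same kind of non-interactive options seen in the log above.
func probeSSH(user, host, keyPath string, port int) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", fmt.Sprint(port),
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Values mirror the log; adjust for a real machine.
	key := "/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa"
	for i := 0; i < 5; i++ {
		if err := probeSSH("docker", "192.168.72.76", key, 22); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}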
	I1210 01:08:16.466310  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetConfigRaw
	I1210 01:08:16.466921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.469152  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469472  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.469501  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.469754  132693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/config.json ...
	I1210 01:08:16.469962  132693 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:16.469982  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:16.470197  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.472368  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.472765  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.472888  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.473052  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473222  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.473325  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.473500  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.473737  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.473752  132693 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:16.581932  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:16.581963  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582183  132693 buildroot.go:166] provisioning hostname "embed-certs-274758"
	I1210 01:08:16.582213  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.582412  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.584799  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585092  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.585124  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.585264  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.585415  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585568  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.585701  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.585836  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.586010  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.586026  132693 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-274758 && echo "embed-certs-274758" | sudo tee /etc/hostname
	I1210 01:08:16.707226  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-274758
	
	I1210 01:08:16.707260  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.709905  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710192  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.710223  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.710428  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.710632  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.710957  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.711127  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:16.711339  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:16.711356  132693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-274758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-274758/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-274758' | sudo tee -a /etc/hosts; 
				fi
			fi
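Note: the SSH commands above set the hostname and then patch /etc/hosts idempotently: replace an existing 127.0.1.1 entry if one is present, otherwise append a new one. A small Go sketch that composes the same shell for an arbitrary hostname (illustration only; hostnameCommands is not a minikube function):

package main

import "fmt"

// hostnameCommands returns the shell run over SSH in the log above:
// first set the hostname, then make sure /etc/hosts has a matching
// 127.0.1.1 entry (replace it if present, append it otherwise).
func hostnameCommands(name string) []string {
	setHostname := fmt.Sprintf(
		"sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
	patchHosts := fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
		fi
	fi`, name)
	return []string{setHostname, patchHosts}
}

func main() {
	for _, cmd := range hostnameCommands("embed-certs-274758") {
		fmt.Println(cmd)
	}
}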
	I1210 01:08:17.578801  133241 start.go:364] duration metric: took 3m37.857041189s to acquireMachinesLock for "old-k8s-version-094470"
	I1210 01:08:17.578868  133241 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:17.578876  133241 fix.go:54] fixHost starting: 
	I1210 01:08:17.579295  133241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:17.579353  133241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:17.595770  133241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46811
	I1210 01:08:17.596141  133241 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:17.596669  133241 main.go:141] libmachine: Using API Version  1
	I1210 01:08:17.596693  133241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:17.597084  133241 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:17.597263  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:17.597405  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetState
	I1210 01:08:17.598931  133241 fix.go:112] recreateIfNeeded on old-k8s-version-094470: state=Stopped err=<nil>
	I1210 01:08:17.598957  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	W1210 01:08:17.599124  133241 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:17.600962  133241 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-094470" ...
	I1210 01:08:16.831001  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:16.831032  132693 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:16.831063  132693 buildroot.go:174] setting up certificates
	I1210 01:08:16.831074  132693 provision.go:84] configureAuth start
	I1210 01:08:16.831084  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetMachineName
	I1210 01:08:16.831362  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:16.833916  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834282  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.834318  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.834446  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.836770  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837061  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.837083  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.837216  132693 provision.go:143] copyHostCerts
	I1210 01:08:16.837284  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:16.837303  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:16.837357  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:16.837447  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:16.837455  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:16.837478  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:16.837528  132693 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:16.837535  132693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:16.837554  132693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:16.837609  132693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.embed-certs-274758 san=[127.0.0.1 192.168.72.76 embed-certs-274758 localhost minikube]
	I1210 01:08:16.953590  132693 provision.go:177] copyRemoteCerts
	I1210 01:08:16.953649  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:16.953676  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:16.956012  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956347  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:16.956384  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:16.956544  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:16.956703  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:16.956828  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:16.956951  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.039674  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:17.061125  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1210 01:08:17.082062  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:17.102519  132693 provision.go:87] duration metric: took 271.416512ms to configureAuth
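Note: configureAuth above regenerates the server certificate with the machine IP and hostname in the SANs, then pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The copy helper in the log is internal to minikube; the sketch below shows one plausible way to stream a local file to a root-owned remote path with the system ssh client and sudo tee. Paths are taken from the log, pushFile is a made-up name, and the mechanism (tee rather than the log's scp helper) is an assumption.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// pushFile streams a local file to a root-owned path on the guest by piping
// it into "sudo tee" over ssh. This mirrors the effect of the scp lines in
// the log, not their exact mechanism.
func pushFile(user, host, keyPath, local, remote string) error {
	f, err := os.Open(local)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s >/dev/null", remote),
	)
	cmd.Stdin = f
	return cmd.Run()
}

func main() {
	err := pushFile("docker", "192.168.72.76",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem",
		"/etc/docker/ca.pem")
	if err != nil {
		fmt.Println("copy failed:", err)
	}
}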
	I1210 01:08:17.102554  132693 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:17.102745  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:17.102858  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.105469  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105818  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.105849  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.105976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.106169  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106326  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.106468  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.106639  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.106804  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.106817  132693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:17.339841  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:17.339873  132693 machine.go:96] duration metric: took 869.895063ms to provisionDockerMachine
	I1210 01:08:17.339888  132693 start.go:293] postStartSetup for "embed-certs-274758" (driver="kvm2")
	I1210 01:08:17.339902  132693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:17.339921  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.340256  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:17.340295  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.342633  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.342947  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.342973  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.343127  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.343294  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.343441  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.343545  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.428245  132693 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:17.432486  132693 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:17.432507  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:17.432568  132693 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:17.432650  132693 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:17.432756  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:17.441892  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:17.464515  132693 start.go:296] duration metric: took 124.610801ms for postStartSetup
	I1210 01:08:17.464558  132693 fix.go:56] duration metric: took 17.693851707s for fixHost
	I1210 01:08:17.464592  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.467173  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467470  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.467494  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.467622  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.467829  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.467976  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.468111  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.468253  132693 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:17.468418  132693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.76 22 <nil> <nil>}
	I1210 01:08:17.468429  132693 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:17.578630  132693 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792897.551711245
	
	I1210 01:08:17.578653  132693 fix.go:216] guest clock: 1733792897.551711245
	I1210 01:08:17.578662  132693 fix.go:229] Guest: 2024-12-10 01:08:17.551711245 +0000 UTC Remote: 2024-12-10 01:08:17.464575547 +0000 UTC m=+290.672639525 (delta=87.135698ms)
	I1210 01:08:17.578690  132693 fix.go:200] guest clock delta is within tolerance: 87.135698ms
	I1210 01:08:17.578697  132693 start.go:83] releasing machines lock for "embed-certs-274758", held for 17.808018239s
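Note: the fix.go lines above read the guest clock with `date +%s.%N`, compare it against the host clock captured when the command returned, and continue only if the delta is within tolerance (about 87ms here). A minimal Go sketch of that comparison; the 2-second tolerance is an assumption for the sketch, not minikube's value.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the "date +%s.%N" output from the guest and returns how
// far the guest clock is from the supplied host reference time.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(hostNow)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values taken from the log lines above.
	hostNow := time.Date(2024, 12, 10, 1, 8, 17, 464575547, time.UTC)
	delta, err := clockDelta("1733792897.551711245", hostNow)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}

If the delta were outside the tolerance, the next step would be to reset the guest clock before proceeding; within tolerance, the lock is released and provisioning continues as shown above.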
	I1210 01:08:17.578727  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.578978  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:17.581740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582079  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.582105  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.582272  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582792  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.582970  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:08:17.583053  132693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:17.583108  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.583173  132693 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:17.583203  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:08:17.585727  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586056  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586096  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586121  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586268  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586447  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586496  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:17.586525  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:17.586661  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.586665  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:08:17.586853  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:08:17.586851  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.587016  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:08:17.587145  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:08:17.689525  132693 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:17.696586  132693 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:17.838483  132693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:17.844291  132693 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:17.844381  132693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:17.858838  132693 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:17.858864  132693 start.go:495] detecting cgroup driver to use...
	I1210 01:08:17.858926  132693 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:17.875144  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:17.887694  132693 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:17.887750  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:17.900263  132693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:17.916462  132693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:18.050837  132693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:18.237065  132693 docker.go:233] disabling docker service ...
	I1210 01:08:18.237134  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:18.254596  132693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:18.267028  132693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:18.384379  132693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:18.511930  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:18.525729  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:18.544642  132693 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:18.544693  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.555569  132693 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:18.555629  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.565952  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.575954  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.589571  132693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:18.604400  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.615079  132693 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.631811  132693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:18.641877  132693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:18.651229  132693 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:18.651284  132693 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:18.663922  132693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:18.673755  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:18.804115  132693 ssh_runner.go:195] Run: sudo systemctl restart crio
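Note: the burst of sed/modprobe/sysctl/systemctl commands above prepares cri-o: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", load br_netfilter, enable IPv4 forwarding, and restart the service. The Go sketch below simply replays the same commands through a dry-run stand-in for minikube's ssh_runner:

package main

import "fmt"

// run is a stand-in for minikube's ssh_runner; this sketch only prints each
// command (a dry run) instead of executing it on a guest.
func run(cmd string) {
	fmt.Println("Run:", cmd)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Same edits as in the log: pause image, cgroup driver, conmon cgroup.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		// Kernel prerequisites for the bridge CNI, then restart the runtime.
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		run(s)
	}
}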
	I1210 01:08:18.902371  132693 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:18.902453  132693 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:18.906806  132693 start.go:563] Will wait 60s for crictl version
	I1210 01:08:18.906876  132693 ssh_runner.go:195] Run: which crictl
	I1210 01:08:18.910409  132693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:18.957196  132693 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:18.957293  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:18.983326  132693 ssh_runner.go:195] Run: crio --version
	I1210 01:08:19.021374  132693 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:17.602512  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .Start
	I1210 01:08:17.602729  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring networks are active...
	I1210 01:08:17.603418  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network default is active
	I1210 01:08:17.603788  133241 main.go:141] libmachine: (old-k8s-version-094470) Ensuring network mk-old-k8s-version-094470 is active
	I1210 01:08:17.604284  133241 main.go:141] libmachine: (old-k8s-version-094470) Getting domain xml...
	I1210 01:08:17.605020  133241 main.go:141] libmachine: (old-k8s-version-094470) Creating domain...
	I1210 01:08:18.869767  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting to get IP...
	I1210 01:08:18.870786  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:18.871226  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:18.871282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:18.871190  134112 retry.go:31] will retry after 260.195661ms: waiting for machine to come up
	I1210 01:08:19.132624  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.133091  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.133113  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.133034  134112 retry.go:31] will retry after 241.852579ms: waiting for machine to come up
	I1210 01:08:19.376814  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.377485  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.377520  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.377420  134112 retry.go:31] will retry after 410.574957ms: waiting for machine to come up
	I1210 01:08:19.023096  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetIP
	I1210 01:08:19.026231  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026697  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:08:19.026740  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:08:19.026981  132693 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:19.031042  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:19.043510  132693 kubeadm.go:883] updating cluster {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:19.043679  132693 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:19.043747  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:19.075804  132693 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:19.075875  132693 ssh_runner.go:195] Run: which lz4
	I1210 01:08:19.079498  132693 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:19.083365  132693 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:19.083394  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:20.282126  132693 crio.go:462] duration metric: took 1.202670831s to copy over tarball
	I1210 01:08:20.282224  132693 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
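Note: the preload handling above is a fallback chain: `crictl images` shows the expected images are missing, `stat` shows /preloaded.tar.lz4 is not on the guest, so the ~392 MB preload tarball is copied over and unpacked into /var with `tar -I lz4`. A short Go sketch of that decision flow; the helper functions are placeholders, not minikube code.

package main

import "fmt"

// Placeholder probes standing in for the SSH commands the log runs; each
// reports whether the corresponding check passed on the guest.
func imagesPreloaded() bool { return false } // crictl images lacked registry.k8s.io/kube-apiserver:v1.31.2
func tarballOnGuest() bool  { return false } // stat /preloaded.tar.lz4 exited with status 1

func copyTarball() {
	fmt.Println("scp preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4")
}

func extractTarball() {
	fmt.Println("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
}

func main() {
	if imagesPreloaded() {
		fmt.Println("all images are preloaded for cri-o runtime, skipping loading")
		return
	}
	if !tarballOnGuest() {
		copyTarball()
	}
	extractTarball()
	fmt.Println("rm /preloaded.tar.lz4") // clean up the tarball afterwards, as in the log
}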
	I1210 01:08:19.790282  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:19.790868  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:19.790898  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:19.790828  134112 retry.go:31] will retry after 535.183165ms: waiting for machine to come up
	I1210 01:08:20.327434  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:20.327936  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:20.327972  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:20.327862  134112 retry.go:31] will retry after 729.193633ms: waiting for machine to come up
	I1210 01:08:21.058815  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.059274  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.059302  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.059224  134112 retry.go:31] will retry after 578.788415ms: waiting for machine to come up
	I1210 01:08:21.640036  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:21.640572  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:21.640604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:21.640523  134112 retry.go:31] will retry after 1.113559472s: waiting for machine to come up
	I1210 01:08:22.755259  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:22.755716  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:22.755741  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:22.755681  134112 retry.go:31] will retry after 940.416935ms: waiting for machine to come up
	I1210 01:08:23.698216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:23.698652  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:23.698684  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:23.698608  134112 retry.go:31] will retry after 1.575038679s: waiting for machine to come up
	I1210 01:08:22.359701  132693 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.077440918s)
	I1210 01:08:22.359757  132693 crio.go:469] duration metric: took 2.077602088s to extract the tarball
	I1210 01:08:22.359770  132693 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:22.404915  132693 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:22.444497  132693 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:08:22.444531  132693 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:08:22.444543  132693 kubeadm.go:934] updating node { 192.168.72.76 8443 v1.31.2 crio true true} ...
	I1210 01:08:22.444702  132693 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-274758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:22.444801  132693 ssh_runner.go:195] Run: crio config
	I1210 01:08:22.484278  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:22.484301  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:22.484311  132693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:22.484345  132693 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.76 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-274758 NodeName:embed-certs-274758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.76"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.76 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:08:22.484508  132693 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.76
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-274758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.76"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.76"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:22.484573  132693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:08:22.493746  132693 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:22.493827  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:22.503898  132693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:08:22.520349  132693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:22.536653  132693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1210 01:08:22.553389  132693 ssh_runner.go:195] Run: grep 192.168.72.76	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:22.556933  132693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.76	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:22.569060  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:22.709124  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:22.728316  132693 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758 for IP: 192.168.72.76
	I1210 01:08:22.728342  132693 certs.go:194] generating shared ca certs ...
	I1210 01:08:22.728382  132693 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:22.728564  132693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:22.728619  132693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:22.728633  132693 certs.go:256] generating profile certs ...
	I1210 01:08:22.728764  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/client.key
	I1210 01:08:22.728852  132693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key.ec69c041
	I1210 01:08:22.728906  132693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key
	I1210 01:08:22.729067  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:22.729121  132693 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:22.729144  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:22.729186  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:22.729223  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:22.729254  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:22.729313  132693 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:22.730259  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:22.786992  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:22.813486  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:22.840236  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:22.870078  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1210 01:08:22.896484  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:22.917547  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:22.940550  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/embed-certs-274758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:22.964784  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:22.987389  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:23.009860  132693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:23.032300  132693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:23.048611  132693 ssh_runner.go:195] Run: openssl version
	I1210 01:08:23.053927  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:23.064731  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068872  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.068917  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:23.074207  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:23.085278  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:23.096087  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100106  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.100155  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:23.105408  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:23.114862  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:23.124112  132693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127915  132693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.127958  132693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:23.132972  132693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
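The hash-and-symlink pairs above follow the OpenSSL CA directory convention: /etc/ssl/certs/<subject-hash>.0 must point at the certificate so tools that scan that directory can find it. A small hypothetical Go helper that mirrors those two shell steps by shelling out to the same openssl binary (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs/<hash>.0 symlink for certPath,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log above.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
}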
	I1210 01:08:23.142672  132693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:23.146554  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:23.152071  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:23.157606  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:23.162974  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:23.168059  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:23.173354  132693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
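Each of the -checkend 86400 calls above asks openssl whether the certificate will expire within the next 24 hours (86400 seconds). A hedged standard-library Go equivalent, shown for one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, the same question "openssl x509 -checkend" answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	ok, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}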
	I1210 01:08:23.178612  132693 kubeadm.go:392] StartCluster: {Name:embed-certs-274758 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-274758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:23.178733  132693 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:23.178788  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.214478  132693 cri.go:89] found id: ""
	I1210 01:08:23.214545  132693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:23.223871  132693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:23.223897  132693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:23.223956  132693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:23.232839  132693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:23.233836  132693 kubeconfig.go:125] found "embed-certs-274758" server: "https://192.168.72.76:8443"
	I1210 01:08:23.235958  132693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:23.244484  132693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.76
	I1210 01:08:23.244517  132693 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:23.244529  132693 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:23.244578  132693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:23.282997  132693 cri.go:89] found id: ""
	I1210 01:08:23.283063  132693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:23.298971  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:23.307664  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:23.307690  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:23.307739  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:23.316208  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:23.316259  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:23.324410  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:23.332254  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:23.332303  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:23.340482  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.348584  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:23.348636  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:23.356760  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:23.364508  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:23.364564  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:23.372644  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:23.380791  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:23.481384  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.558104  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.076675674s)
	I1210 01:08:24.558155  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.743002  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.812833  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:24.910903  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:24.911007  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.411815  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.911457  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.411340  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:25.276751  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:25.277027  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:25.277058  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:25.276996  134112 retry.go:31] will retry after 1.531276871s: waiting for machine to come up
	I1210 01:08:26.809860  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:26.810332  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:26.810365  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:26.810270  134112 retry.go:31] will retry after 2.029725217s: waiting for machine to come up
	I1210 01:08:28.842419  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:28.842945  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:28.842979  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:28.842895  134112 retry.go:31] will retry after 2.777752063s: waiting for machine to come up
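The retry.go lines for old-k8s-version-094470 show a growing back-off while the driver waits for the VM to pick up a DHCP lease. A minimal sketch of that retry pattern (a hypothetical helper, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxTries is reached,
// sleeping a little longer between attempts each time, much like the
// "will retry after ..." lines in the log above.
func retryWithBackoff(maxTries int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < maxTries; i++ {
		if err := fn(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, will retry after %s\n", i+1, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait, roughly like the log
	}
	return errors.New("gave up waiting for the machine to come up")
}

func main() {
	attempts := 0
	_ = retryWithBackoff(5, time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("machine has no IP yet")
		}
		return nil
	})
}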
	I1210 01:08:26.911681  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:26.925244  132693 api_server.go:72] duration metric: took 2.014341005s to wait for apiserver process to appear ...
	I1210 01:08:26.925276  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:08:26.925307  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.461167  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.461199  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.461221  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.490907  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:08:29.490935  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:08:29.925947  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:29.938161  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:29.938197  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.425822  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.448700  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:08:30.448741  132693 api_server.go:103] status: https://192.168.72.76:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:08:30.926368  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:08:30.930770  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:08:30.936664  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:08:30.936706  132693 api_server.go:131] duration metric: took 4.011421056s to wait for apiserver health ...
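The 403, then 500, then 200 progression above is the usual sequence while a restarted apiserver finishes its post-start hooks (rbac/bootstrap-roles is the last one to clear here). A minimal poller in the same spirit, written as a sketch that skips TLS verification on the assumption that the probe runs anonymously:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Anonymous requests may see 403/500 first, as in the
// log above, so any non-200 answer is treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.76:8443/healthz", time.Minute))
}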
	I1210 01:08:30.936719  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:08:30.936731  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:30.938509  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:08:30.939651  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:08:30.949390  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:08:30.973739  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:08:30.988397  132693 system_pods.go:59] 8 kube-system pods found
	I1210 01:08:30.988441  132693 system_pods.go:61] "coredns-7c65d6cfc9-g98k2" [4358eb5a-fa28-405d-b6a4-66d232c1b060] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:08:30.988451  132693 system_pods.go:61] "etcd-embed-certs-274758" [11343776-d268-428f-9af8-4d20e4c1dda4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:08:30.988461  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [c60d7a8e-e029-47ec-8f9d-5531aaeeb595] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:08:30.988471  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [53c0e257-c3c1-410b-8ce5-8350530160c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:08:30.988478  132693 system_pods.go:61] "kube-proxy-d29zg" [cbf2dba9-1c85-4e21-bf0b-01cf3fcd00df] Running
	I1210 01:08:30.988503  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [6ecaa7c9-f7b6-450d-941c-8ccf582af275] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:08:30.988516  132693 system_pods.go:61] "metrics-server-6867b74b74-mhxtf" [2874a85a-c957-4056-b60e-be170f3c1ab2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:08:30.988527  132693 system_pods.go:61] "storage-provisioner" [7e2b93e2-0f25-4bb1-bca6-02a8ea5336ed] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:08:30.988539  132693 system_pods.go:74] duration metric: took 14.779044ms to wait for pod list to return data ...
	I1210 01:08:30.988567  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:08:30.993600  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:08:30.993632  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:08:30.993652  132693 node_conditions.go:105] duration metric: took 5.074866ms to run NodePressure ...
	I1210 01:08:30.993680  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:31.251140  132693 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254339  132693 kubeadm.go:739] kubelet initialised
	I1210 01:08:31.254358  132693 kubeadm.go:740] duration metric: took 3.193934ms waiting for restarted kubelet to initialise ...
	I1210 01:08:31.254367  132693 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:08:31.259628  132693 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.264379  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264406  132693 pod_ready.go:82] duration metric: took 4.746678ms for pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.264417  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "coredns-7c65d6cfc9-g98k2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.264434  132693 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.268773  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268794  132693 pod_ready.go:82] duration metric: took 4.345772ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.268804  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "etcd-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.268812  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.272890  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272911  132693 pod_ready.go:82] duration metric: took 4.087379ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.272921  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.272929  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.377990  132693 pod_ready.go:98] node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378020  132693 pod_ready.go:82] duration metric: took 105.077792ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	E1210 01:08:31.378033  132693 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-274758" hosting pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-274758" has status "Ready":"False"
	I1210 01:08:31.378041  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777563  132693 pod_ready.go:93] pod "kube-proxy-d29zg" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:31.777584  132693 pod_ready.go:82] duration metric: took 399.533068ms for pod "kube-proxy-d29zg" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:31.777598  132693 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
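The pod_ready lines above inspect each system-critical pod's Ready condition and skip pods whose node has not yet reported Ready. A sketch of the Ready-condition check itself, assuming the standard k8s.io/api/core/v1 types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True, mirroring the
// check behind the pod_ready log lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}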
	I1210 01:08:31.623742  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:31.624253  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | unable to find current IP address of domain old-k8s-version-094470 in network mk-old-k8s-version-094470
	I1210 01:08:31.624289  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | I1210 01:08:31.624189  134112 retry.go:31] will retry after 3.852910592s: waiting for machine to come up
	I1210 01:08:36.766538  133282 start.go:364] duration metric: took 3m54.06534367s to acquireMachinesLock for "default-k8s-diff-port-901295"
	I1210 01:08:36.766623  133282 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:36.766636  133282 fix.go:54] fixHost starting: 
	I1210 01:08:36.767069  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:36.767139  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:36.785475  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41443
	I1210 01:08:36.786023  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:36.786614  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:08:36.786640  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:36.786956  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:36.787147  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:36.787295  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:08:36.788719  133282 fix.go:112] recreateIfNeeded on default-k8s-diff-port-901295: state=Stopped err=<nil>
	I1210 01:08:36.788745  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	W1210 01:08:36.788889  133282 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:36.791479  133282 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-901295" ...
	I1210 01:08:33.784092  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:35.784732  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:36.792712  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Start
	I1210 01:08:36.792883  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring networks are active...
	I1210 01:08:36.793559  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network default is active
	I1210 01:08:36.793891  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Ensuring network mk-default-k8s-diff-port-901295 is active
	I1210 01:08:36.794354  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Getting domain xml...
	I1210 01:08:36.795038  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Creating domain...
	I1210 01:08:35.480373  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480901  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has current primary IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.480926  133241 main.go:141] libmachine: (old-k8s-version-094470) Found IP for machine: 192.168.61.11
	I1210 01:08:35.480955  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserving static IP address...
	I1210 01:08:35.481323  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.481352  133241 main.go:141] libmachine: (old-k8s-version-094470) Reserved static IP address: 192.168.61.11
	I1210 01:08:35.481370  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | skip adding static IP to network mk-old-k8s-version-094470 - found existing host DHCP lease matching {name: "old-k8s-version-094470", mac: "52:54:00:00:f3:52", ip: "192.168.61.11"}
	I1210 01:08:35.481392  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Getting to WaitForSSH function...
	I1210 01:08:35.481408  133241 main.go:141] libmachine: (old-k8s-version-094470) Waiting for SSH to be available...
	I1210 01:08:35.483785  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484269  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.484314  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.484458  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH client type: external
	I1210 01:08:35.484493  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa (-rw-------)
	I1210 01:08:35.484526  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:35.484548  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | About to run SSH command:
	I1210 01:08:35.484557  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | exit 0
	I1210 01:08:35.610216  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:35.610554  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetConfigRaw
	I1210 01:08:35.611179  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.613811  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614184  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.614221  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.614448  133241 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/config.json ...
	I1210 01:08:35.614659  133241 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:35.614681  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:35.614861  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.616965  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617478  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.617507  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.617606  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.617741  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617880  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.617993  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.618166  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.618416  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.618431  133241 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:35.730293  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:35.730326  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730614  133241 buildroot.go:166] provisioning hostname "old-k8s-version-094470"
	I1210 01:08:35.730647  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.730902  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.733604  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.733943  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.733963  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.734110  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.734290  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734436  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.734589  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.734737  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.734921  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.734937  133241 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-094470 && echo "old-k8s-version-094470" | sudo tee /etc/hostname
	I1210 01:08:35.856219  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-094470
	
	I1210 01:08:35.856272  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.859777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860157  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.860194  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.860364  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:35.860590  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860808  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:35.860948  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:35.861145  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:35.861370  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:35.861391  133241 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-094470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-094470/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-094470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:35.984487  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:35.984523  133241 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:35.984571  133241 buildroot.go:174] setting up certificates
	I1210 01:08:35.984585  133241 provision.go:84] configureAuth start
	I1210 01:08:35.984596  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetMachineName
	I1210 01:08:35.984888  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:35.987515  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.987891  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.987920  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.988078  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:35.990428  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.990806  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:35.990838  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:35.991028  133241 provision.go:143] copyHostCerts
	I1210 01:08:35.991108  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:35.991125  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:35.991208  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:35.991378  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:35.991396  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:35.991436  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:35.991548  133241 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:35.991560  133241 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:35.991593  133241 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:35.991684  133241 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-094470 san=[127.0.0.1 192.168.61.11 localhost minikube old-k8s-version-094470]
	I1210 01:08:36.166767  133241 provision.go:177] copyRemoteCerts
	I1210 01:08:36.166825  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:36.166872  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.169777  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170166  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.170196  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.170452  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.170662  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.170837  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.170985  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.255600  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:36.277974  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1210 01:08:36.299608  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:08:36.320325  133241 provision.go:87] duration metric: took 335.730286ms to configureAuth
	I1210 01:08:36.320346  133241 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:36.320502  133241 config.go:182] Loaded profile config "old-k8s-version-094470": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 01:08:36.320572  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.323358  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.323810  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.323836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.324012  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.324213  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324351  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.324479  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.324608  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.324773  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.324789  133241 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:36.538020  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:36.538052  133241 machine.go:96] duration metric: took 923.37742ms to provisionDockerMachine
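	(Unrolled, the provisioning step logged just above amounts to the following shell sequence when run by hand. This is only a sketch; it assumes, as the log suggests, that the guest's crio.service reads /etc/sysconfig/crio.minikube as an environment file.)
	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio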
	I1210 01:08:36.538065  133241 start.go:293] postStartSetup for "old-k8s-version-094470" (driver="kvm2")
	I1210 01:08:36.538075  133241 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:36.538092  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.538437  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:36.538473  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.540836  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541187  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.541229  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.541400  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.541594  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.541728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.541852  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.623740  133241 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:36.627323  133241 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:36.627343  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:36.627405  133241 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:36.627487  133241 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:36.627568  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:36.635720  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:36.656793  133241 start.go:296] duration metric: took 118.715633ms for postStartSetup
	I1210 01:08:36.656832  133241 fix.go:56] duration metric: took 19.077955657s for fixHost
	I1210 01:08:36.656853  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.659288  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659586  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.659618  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.659772  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.659961  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660132  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.660250  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.660391  133241 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:36.660552  133241 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.11 22 <nil> <nil>}
	I1210 01:08:36.660562  133241 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:36.766355  133241 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792916.738645658
	
	I1210 01:08:36.766375  133241 fix.go:216] guest clock: 1733792916.738645658
	I1210 01:08:36.766382  133241 fix.go:229] Guest: 2024-12-10 01:08:36.738645658 +0000 UTC Remote: 2024-12-10 01:08:36.656836618 +0000 UTC m=+237.074026661 (delta=81.80904ms)
	I1210 01:08:36.766420  133241 fix.go:200] guest clock delta is within tolerance: 81.80904ms
	I1210 01:08:36.766429  133241 start.go:83] releasing machines lock for "old-k8s-version-094470", held for 19.187587757s
	I1210 01:08:36.766461  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.766761  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:36.769758  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770129  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.770150  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.770309  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770818  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.770992  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .DriverName
	I1210 01:08:36.771090  133241 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:36.771157  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.771182  133241 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:36.771203  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHHostname
	I1210 01:08:36.773923  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774103  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774272  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774292  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774434  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774545  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:36.774585  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:36.774616  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774728  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHPort
	I1210 01:08:36.774817  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.774843  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHKeyPath
	I1210 01:08:36.774975  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.775004  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetSSHUsername
	I1210 01:08:36.775148  133241 sshutil.go:53] new ssh client: &{IP:192.168.61.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/old-k8s-version-094470/id_rsa Username:docker}
	I1210 01:08:36.875634  133241 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:36.880774  133241 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:37.023282  133241 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:37.029380  133241 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:37.029436  133241 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:37.044071  133241 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:37.044093  133241 start.go:495] detecting cgroup driver to use...
	I1210 01:08:37.044157  133241 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:37.058626  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:37.070607  133241 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:37.070659  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:37.086913  133241 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:37.102676  133241 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:37.221862  133241 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:37.373086  133241 docker.go:233] disabling docker service ...
	I1210 01:08:37.373166  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:37.386711  133241 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:37.399414  133241 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:37.546237  133241 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:37.660681  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:37.673736  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:37.690107  133241 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1210 01:08:37.690180  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.700871  133241 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:37.700920  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.711545  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.722078  133241 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:37.732603  133241 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
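	(A quick way to confirm the three sed edits above landed is to grep the drop-in they modify; a sketch, since the rest of that file is not shown in this log.)
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected per the log: pause_image = "registry.k8s.io/pause:3.2"
	    #                       cgroup_manager = "cgroupfs"
	    #                       conmon_cgroup = "pod"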
	I1210 01:08:37.743617  133241 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:37.753641  133241 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:37.753699  133241 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:37.765737  133241 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:37.774173  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:37.891188  133241 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:37.983170  133241 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:37.983248  133241 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:37.987987  133241 start.go:563] Will wait 60s for crictl version
	I1210 01:08:37.988049  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:37.993150  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:38.045191  133241 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:38.045281  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.071768  133241 ssh_runner.go:195] Run: crio --version
	I1210 01:08:38.100869  133241 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1210 01:08:38.102141  133241 main.go:141] libmachine: (old-k8s-version-094470) Calling .GetIP
	I1210 01:08:38.104790  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105112  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:f3:52", ip: ""} in network mk-old-k8s-version-094470: {Iface:virbr3 ExpiryTime:2024-12-10 01:58:40 +0000 UTC Type:0 Mac:52:54:00:00:f3:52 Iaid: IPaddr:192.168.61.11 Prefix:24 Hostname:old-k8s-version-094470 Clientid:01:52:54:00:00:f3:52}
	I1210 01:08:38.105143  133241 main.go:141] libmachine: (old-k8s-version-094470) DBG | domain old-k8s-version-094470 has defined IP address 192.168.61.11 and MAC address 52:54:00:00:f3:52 in network mk-old-k8s-version-094470
	I1210 01:08:38.105337  133241 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:38.109454  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
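	(The /etc/hosts one-liner above is dense; split into equivalent steps it reads as follows, a sketch of the same replace-then-append pattern.)
	    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$     # drop any stale entry
	    printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/h.$$    # append the current mapping
	    sudo cp /tmp/h.$$ /etc/hosts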
	I1210 01:08:38.120925  133241 kubeadm.go:883] updating cluster {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:38.121060  133241 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1210 01:08:38.121130  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:38.169400  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:38.169462  133241 ssh_runner.go:195] Run: which lz4
	I1210 01:08:38.172973  133241 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:38.176684  133241 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:38.176715  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1210 01:08:38.285566  132693 pod_ready.go:103] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.784437  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:08:38.784470  132693 pod_ready.go:82] duration metric: took 7.006865777s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:38.784480  132693 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	I1210 01:08:40.791489  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:38.076463  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting to get IP...
	I1210 01:08:38.077256  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.077706  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.077616  134254 retry.go:31] will retry after 287.089061ms: waiting for machine to come up
	I1210 01:08:38.366347  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366906  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.366937  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.366866  134254 retry.go:31] will retry after 359.654145ms: waiting for machine to come up
	I1210 01:08:38.728592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729111  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:38.729144  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:38.729048  134254 retry.go:31] will retry after 299.617496ms: waiting for machine to come up
	I1210 01:08:39.030785  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031359  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.031382  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.031312  134254 retry.go:31] will retry after 586.950887ms: waiting for machine to come up
	I1210 01:08:39.620247  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620872  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:39.620903  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:39.620802  134254 retry.go:31] will retry after 623.103267ms: waiting for machine to come up
	I1210 01:08:40.245322  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245640  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.245669  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.245600  134254 retry.go:31] will retry after 712.603102ms: waiting for machine to come up
	I1210 01:08:40.960316  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960862  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:40.960892  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:40.960806  134254 retry.go:31] will retry after 999.356089ms: waiting for machine to come up
	I1210 01:08:41.961395  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:41.961929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:41.961862  134254 retry.go:31] will retry after 1.050049361s: waiting for machine to come up
	I1210 01:08:39.654620  133241 crio.go:462] duration metric: took 1.481673499s to copy over tarball
	I1210 01:08:39.654705  133241 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:08:42.473447  133241 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.818699717s)
	I1210 01:08:42.473486  133241 crio.go:469] duration metric: took 2.818833041s to extract the tarball
	I1210 01:08:42.473496  133241 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:08:42.514635  133241 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:42.546161  133241 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1210 01:08:42.546204  133241 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:08:42.546276  133241 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.546339  133241 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.546344  133241 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.546347  133241 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.546306  133241 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1210 01:08:42.546372  133241 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.546315  133241 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.548150  133241 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1210 01:08:42.548149  133241 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.548162  133241 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:42.548134  133241 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.548135  133241 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.548138  133241 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.548326  133241 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.700402  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.706096  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.716669  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.717025  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.723380  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.727890  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.740867  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1210 01:08:42.775300  133241 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1210 01:08:42.775345  133241 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.775393  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827802  133241 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1210 01:08:42.827855  133241 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.827873  133241 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1210 01:08:42.827906  133241 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.827936  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.827953  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.851952  133241 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1210 01:08:42.851998  133241 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.852063  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872369  133241 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1210 01:08:42.872408  133241 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.872446  133241 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1210 01:08:42.872479  133241 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1210 01:08:42.872489  133241 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.872497  133241 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1210 01:08:42.872516  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872535  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872458  133241 ssh_runner.go:195] Run: which crictl
	I1210 01:08:42.872578  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.872638  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:42.872672  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952963  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:42.952964  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:42.956464  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:42.956535  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:42.956580  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:42.956614  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:42.956681  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.035636  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.086938  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.087032  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1210 01:08:43.104765  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.104844  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1210 01:08:43.104891  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1210 01:08:43.109871  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1210 01:08:43.122137  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1210 01:08:43.193838  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1210 01:08:43.256301  133241 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1210 01:08:43.256342  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1210 01:08:43.256431  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1210 01:08:43.258819  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1210 01:08:43.258928  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1210 01:08:43.259011  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1210 01:08:43.281411  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1210 01:08:43.300319  133241 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1210 01:08:43.334327  133241 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:08:43.478183  133241 cache_images.go:92] duration metric: took 931.957836ms to LoadCachedImages
	W1210 01:08:43.478292  133241 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1210 01:08:43.478310  133241 kubeadm.go:934] updating node { 192.168.61.11 8443 v1.20.0 crio true true} ...
	I1210 01:08:43.478501  133241 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-094470 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:08:43.478610  133241 ssh_runner.go:195] Run: crio config
	I1210 01:08:43.523627  133241 cni.go:84] Creating CNI manager for ""
	I1210 01:08:43.523651  133241 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:08:43.523660  133241 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:08:43.523680  133241 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.11 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-094470 NodeName:old-k8s-version-094470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1210 01:08:43.523872  133241 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-094470"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:08:43.523947  133241 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1210 01:08:43.534926  133241 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:08:43.535015  133241 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:08:43.544420  133241 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1210 01:08:43.561582  133241 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:08:43.578427  133241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1210 01:08:43.595593  133241 ssh_runner.go:195] Run: grep 192.168.61.11	control-plane.minikube.internal$ /etc/hosts
	I1210 01:08:43.599137  133241 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:43.610483  133241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:43.750543  133241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:08:43.766573  133241 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470 for IP: 192.168.61.11
	I1210 01:08:43.766599  133241 certs.go:194] generating shared ca certs ...
	I1210 01:08:43.766628  133241 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:43.766828  133241 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:08:43.766881  133241 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:08:43.766897  133241 certs.go:256] generating profile certs ...
	I1210 01:08:43.767022  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.key
	I1210 01:08:43.767097  133241 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key.11e7a196
	I1210 01:08:43.767158  133241 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key
	I1210 01:08:43.767318  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:08:43.767359  133241 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:08:43.767391  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:08:43.767428  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:08:43.767461  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:08:43.767502  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:08:43.767554  133241 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:43.768599  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:08:43.825215  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:08:43.852218  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:08:43.888256  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:08:43.921633  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1210 01:08:43.954815  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:08:43.986660  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:08:44.009065  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:08:44.030476  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:08:44.053232  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:08:44.078371  133241 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:08:44.100076  133241 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:08:44.115731  133241 ssh_runner.go:195] Run: openssl version
	I1210 01:08:44.121192  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:08:44.130554  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134639  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.134697  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:08:44.140323  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:08:44.150593  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:08:44.160638  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165053  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.165121  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:08:44.170391  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:08:44.180113  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:08:44.189938  133241 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193880  133241 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.193931  133241 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:08:44.199419  133241 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
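	(The three certificate blocks above, for 862962.pem, minikubeCA.pem and 86296.pem, all apply the same OpenSSL CA-trust pattern; done by hand for one of them it looks like this sketch.)
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # subject hash, e.g. b5213941 as in the log
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # OpenSSL looks CAs up via <hash>.0 symlinks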
	I1210 01:08:44.209346  133241 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:08:44.213474  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:08:44.218965  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:08:44.224344  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:08:44.229835  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:08:44.235365  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:08:44.240697  133241 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
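	(The -checkend 86400 probes above ask whether each certificate remains valid for at least the next 86400 seconds, i.e. 24 hours; exit status 0 means it does. For example, checking a single cert by hand, a sketch:)
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	        echo "certificate valid for at least another 24h"
	    else
	        echo "certificate expires within 24h (or could not be read)"
	    fi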
	I1210 01:08:44.245999  133241 kubeadm.go:392] StartCluster: {Name:old-k8s-version-094470 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-094470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.11 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:08:44.246102  133241 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:08:44.246164  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.287050  133241 cri.go:89] found id: ""
	I1210 01:08:44.287167  133241 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:08:44.297028  133241 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:08:44.297044  133241 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:08:44.297092  133241 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:08:44.306118  133241 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:08:44.307143  133241 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-094470" does not appear in /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:08:44.307777  133241 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-79135/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-094470" cluster setting kubeconfig missing "old-k8s-version-094470" context setting]
	I1210 01:08:44.308663  133241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:08:44.394164  133241 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:08:44.406683  133241 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.11
	I1210 01:08:44.406723  133241 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:08:44.406739  133241 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:08:44.406799  133241 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:08:44.444917  133241 cri.go:89] found id: ""
	I1210 01:08:44.444995  133241 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:08:44.465693  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:08:44.475399  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:08:44.475424  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:08:44.475482  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:08:44.483802  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:08:44.483844  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:08:44.492395  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:08:44.501080  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:08:44.501141  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:08:44.509973  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.518103  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:08:44.518176  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:08:44.527145  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:08:44.535124  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:08:44.535179  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:08:44.543773  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:08:44.552533  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:42.791894  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:45.934242  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:43.013971  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014430  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:43.014467  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:43.014369  134254 retry.go:31] will retry after 1.273602138s: waiting for machine to come up
	I1210 01:08:44.289131  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289686  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:44.289720  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:44.289616  134254 retry.go:31] will retry after 1.911761795s: waiting for machine to come up
	I1210 01:08:46.203851  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204263  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:46.204321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:46.204199  134254 retry.go:31] will retry after 2.653257729s: waiting for machine to come up
	I1210 01:08:44.667527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.368529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.572674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.671006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:45.759483  133241 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:08:45.759588  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.260599  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:46.759851  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.260403  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:47.760555  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.259665  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.760390  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:49.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:48.292324  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:50.789665  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:48.859690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860078  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:48.860108  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:48.860029  134254 retry.go:31] will retry after 3.186060231s: waiting for machine to come up
	I1210 01:08:52.048071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048524  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | unable to find current IP address of domain default-k8s-diff-port-901295 in network mk-default-k8s-diff-port-901295
	I1210 01:08:52.048554  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | I1210 01:08:52.048478  134254 retry.go:31] will retry after 2.823038983s: waiting for machine to come up
	I1210 01:08:49.759795  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.260493  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:50.760146  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.259783  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:51.760554  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.260543  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:52.760452  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.260523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:53.759677  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:54.259750  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
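	The run of pgrep lines above is api_server.go polling roughly every 500ms until a kube-apiserver process appears. A minimal Go sketch of that poll-until-present pattern follows; the function name, timeout, and interval are illustrative, not minikube's actual helper.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess re-runs pgrep until a process matching pattern exists
	// or the context expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for process %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is up")
	}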
	I1210 01:08:56.158844  132605 start.go:364] duration metric: took 51.38781342s to acquireMachinesLock for "no-preload-584179"
	I1210 01:08:56.158913  132605 start.go:96] Skipping create...Using existing machine configuration
	I1210 01:08:56.158923  132605 fix.go:54] fixHost starting: 
	I1210 01:08:56.159339  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:08:56.159381  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:08:56.178552  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34315
	I1210 01:08:56.178997  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:08:56.179471  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:08:56.179497  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:08:56.179803  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:08:56.179977  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:08:56.180119  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:08:56.181496  132605 fix.go:112] recreateIfNeeded on no-preload-584179: state=Stopped err=<nil>
	I1210 01:08:56.181521  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	W1210 01:08:56.181661  132605 fix.go:138] unexpected machine state, will restart: <nil>
	I1210 01:08:56.183508  132605 out.go:177] * Restarting existing kvm2 VM for "no-preload-584179" ...
	I1210 01:08:52.790210  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:54.790515  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:56.184725  132605 main.go:141] libmachine: (no-preload-584179) Calling .Start
	I1210 01:08:56.184883  132605 main.go:141] libmachine: (no-preload-584179) Ensuring networks are active...
	I1210 01:08:56.185680  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network default is active
	I1210 01:08:56.186043  132605 main.go:141] libmachine: (no-preload-584179) Ensuring network mk-no-preload-584179 is active
	I1210 01:08:56.186427  132605 main.go:141] libmachine: (no-preload-584179) Getting domain xml...
	I1210 01:08:56.187126  132605 main.go:141] libmachine: (no-preload-584179) Creating domain...
	I1210 01:08:54.875474  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875880  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has current primary IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.875902  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Found IP for machine: 192.168.39.193
	I1210 01:08:54.875918  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserving static IP address...
	I1210 01:08:54.876379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.876411  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Reserved static IP address: 192.168.39.193
	I1210 01:08:54.876434  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | skip adding static IP to network mk-default-k8s-diff-port-901295 - found existing host DHCP lease matching {name: "default-k8s-diff-port-901295", mac: "52:54:00:f7:2f:3d", ip: "192.168.39.193"}
	I1210 01:08:54.876456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Getting to WaitForSSH function...
	I1210 01:08:54.876473  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Waiting for SSH to be available...
	I1210 01:08:54.878454  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878758  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:54.878787  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:54.878940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH client type: external
	I1210 01:08:54.878969  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa (-rw-------)
	I1210 01:08:54.878993  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:08:54.879003  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | About to run SSH command:
	I1210 01:08:54.879011  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | exit 0
	I1210 01:08:55.006046  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | SSH cmd err, output: <nil>: 
	I1210 01:08:55.006394  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetConfigRaw
	I1210 01:08:55.007100  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.009429  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.009753  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.009803  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.010054  133282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/config.json ...
	I1210 01:08:55.010278  133282 machine.go:93] provisionDockerMachine start ...
	I1210 01:08:55.010302  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.010513  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.012899  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013198  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.013248  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.013340  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.013509  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013643  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.013726  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.013879  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.014070  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.014081  133282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:08:55.126262  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:08:55.126294  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126547  133282 buildroot.go:166] provisioning hostname "default-k8s-diff-port-901295"
	I1210 01:08:55.126592  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.126756  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.129397  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.129798  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.129921  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.130071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130187  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.130279  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.130380  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.130545  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.130572  133282 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-901295 && echo "default-k8s-diff-port-901295" | sudo tee /etc/hostname
	I1210 01:08:55.256829  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-901295
	
	I1210 01:08:55.256857  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.259599  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.259977  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.260006  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.260257  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.260456  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260645  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.260795  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.260996  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.261212  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.261239  133282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-901295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-901295/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-901295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:08:55.387808  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1210 01:08:55.387837  133282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:08:55.387872  133282 buildroot.go:174] setting up certificates
	I1210 01:08:55.387883  133282 provision.go:84] configureAuth start
	I1210 01:08:55.387897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetMachineName
	I1210 01:08:55.388193  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:55.391297  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391649  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.391683  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.391799  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.393859  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394150  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.394176  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.394272  133282 provision.go:143] copyHostCerts
	I1210 01:08:55.394336  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:08:55.394353  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:08:55.394411  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:08:55.394501  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:08:55.394508  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:08:55.394530  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:08:55.394615  133282 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:08:55.394624  133282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:08:55.394643  133282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:08:55.394693  133282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-901295 san=[127.0.0.1 192.168.39.193 default-k8s-diff-port-901295 localhost minikube]
	I1210 01:08:55.502253  133282 provision.go:177] copyRemoteCerts
	I1210 01:08:55.502313  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:08:55.502341  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.504919  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505216  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.505252  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.505425  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.505613  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.505749  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.505932  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:55.593242  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:08:55.616378  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1210 01:08:55.638786  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1210 01:08:55.660268  133282 provision.go:87] duration metric: took 272.369019ms to configureAuth
	I1210 01:08:55.660293  133282 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:08:55.660506  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:08:55.660597  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.662964  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663283  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.663312  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.663461  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.663656  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663820  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.663944  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.664091  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:55.664330  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:55.664354  133282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:08:55.918356  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:08:55.918389  133282 machine.go:96] duration metric: took 908.095325ms to provisionDockerMachine
	I1210 01:08:55.918402  133282 start.go:293] postStartSetup for "default-k8s-diff-port-901295" (driver="kvm2")
	I1210 01:08:55.918415  133282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:08:55.918450  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:55.918790  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:08:55.918823  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:55.921575  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.921897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:55.921929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:55.922026  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:55.922205  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:55.922375  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:55.922485  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.008442  133282 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:08:56.012149  133282 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:08:56.012165  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:08:56.012239  133282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:08:56.012325  133282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:08:56.012428  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:08:56.021144  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:08:56.042869  133282 start.go:296] duration metric: took 124.452091ms for postStartSetup
	I1210 01:08:56.042914  133282 fix.go:56] duration metric: took 19.276278483s for fixHost
	I1210 01:08:56.042940  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.045280  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045612  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.045644  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.045845  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.046002  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046123  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.046224  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.046353  133282 main.go:141] libmachine: Using SSH client type: native
	I1210 01:08:56.046530  133282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.193 22 <nil> <nil>}
	I1210 01:08:56.046541  133282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:08:56.158690  133282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792936.125620375
	
	I1210 01:08:56.158714  133282 fix.go:216] guest clock: 1733792936.125620375
	I1210 01:08:56.158722  133282 fix.go:229] Guest: 2024-12-10 01:08:56.125620375 +0000 UTC Remote: 2024-12-10 01:08:56.042918319 +0000 UTC m=+253.475376365 (delta=82.702056ms)
	I1210 01:08:56.158741  133282 fix.go:200] guest clock delta is within tolerance: 82.702056ms
	I1210 01:08:56.158746  133282 start.go:83] releasing machines lock for "default-k8s-diff-port-901295", held for 19.392149024s
	I1210 01:08:56.158769  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.159017  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:56.161998  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.162350  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.162541  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163022  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163197  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:08:56.163296  133282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:08:56.163346  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.163449  133282 ssh_runner.go:195] Run: cat /version.json
	I1210 01:08:56.163481  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:08:56.166071  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166443  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166475  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166500  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.166750  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.166897  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:56.166920  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.166929  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:56.167083  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167089  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:08:56.167255  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:08:56.167258  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.167400  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:08:56.167529  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:08:56.273144  133282 ssh_runner.go:195] Run: systemctl --version
	I1210 01:08:56.278678  133282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:08:56.423921  133282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:08:56.429467  133282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:08:56.429537  133282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:08:56.443900  133282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:08:56.443927  133282 start.go:495] detecting cgroup driver to use...
	I1210 01:08:56.443996  133282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:08:56.458653  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:08:56.471717  133282 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:08:56.471798  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:08:56.483960  133282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:08:56.495903  133282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:08:56.604493  133282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:08:56.741771  133282 docker.go:233] disabling docker service ...
	I1210 01:08:56.741846  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:08:56.755264  133282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:08:56.767590  133282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:08:56.922151  133282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:08:57.045410  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:08:57.061217  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:08:57.079488  133282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:08:57.079552  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.090356  133282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:08:57.090434  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.100784  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.111326  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.120417  133282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:08:57.129871  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.140489  133282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.157524  133282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:08:57.167947  133282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:08:57.176904  133282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:08:57.176947  133282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:08:57.188925  133282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:08:57.197558  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:08:57.319427  133282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:08:57.419493  133282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:08:57.419570  133282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:08:57.424302  133282 start.go:563] Will wait 60s for crictl version
	I1210 01:08:57.424362  133282 ssh_runner.go:195] Run: which crictl
	I1210 01:08:57.428067  133282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:08:57.468247  133282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:08:57.468319  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.497834  133282 ssh_runner.go:195] Run: crio --version
	I1210 01:08:57.527032  133282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:08:57.528284  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetIP
	I1210 01:08:57.531510  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.531882  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:08:57.531908  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:08:57.532178  133282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1210 01:08:57.536149  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:08:57.548081  133282 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:08:57.548221  133282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:08:57.548283  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:08:57.585539  133282 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:08:57.585619  133282 ssh_runner.go:195] Run: which lz4
	I1210 01:08:57.590131  133282 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1210 01:08:57.595506  133282 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1210 01:08:57.595534  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1210 01:08:54.760444  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.259774  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:55.759929  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.260379  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:56.759985  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.260495  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.759699  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.260475  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:58.759732  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:59.260424  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:08:57.291502  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:59.792026  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:01.793182  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:08:57.453911  132605 main.go:141] libmachine: (no-preload-584179) Waiting to get IP...
	I1210 01:08:57.455000  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.455393  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.455472  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.455384  134419 retry.go:31] will retry after 189.932045ms: waiting for machine to come up
	I1210 01:08:57.646978  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.647486  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.647520  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.647418  134419 retry.go:31] will retry after 278.873511ms: waiting for machine to come up
	I1210 01:08:57.928222  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:57.928797  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:57.928837  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:57.928738  134419 retry.go:31] will retry after 468.940412ms: waiting for machine to come up
	I1210 01:08:58.399469  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.400105  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.400131  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.400041  134419 retry.go:31] will retry after 459.796386ms: waiting for machine to come up
	I1210 01:08:58.861581  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:58.862042  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:58.862075  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:58.861985  134419 retry.go:31] will retry after 493.349488ms: waiting for machine to come up
	I1210 01:08:59.356810  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:08:59.357338  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:08:59.357365  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:08:59.357314  134419 retry.go:31] will retry after 736.790492ms: waiting for machine to come up
	I1210 01:09:00.095779  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:00.096246  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:00.096281  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:00.096182  134419 retry.go:31] will retry after 1.059095907s: waiting for machine to come up
	I1210 01:09:01.157286  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:01.157718  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:01.157759  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:01.157656  134419 retry.go:31] will retry after 1.18137171s: waiting for machine to come up
	I1210 01:08:58.835009  133282 crio.go:462] duration metric: took 1.24490918s to copy over tarball
	I1210 01:08:58.835108  133282 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1210 01:09:00.985062  133282 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.149905713s)
	I1210 01:09:00.985097  133282 crio.go:469] duration metric: took 2.150055868s to extract the tarball
	I1210 01:09:00.985108  133282 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1210 01:09:01.032869  133282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:01.074578  133282 crio.go:514] all images are preloaded for cri-o runtime.
	I1210 01:09:01.074609  133282 cache_images.go:84] Images are preloaded, skipping loading
	I1210 01:09:01.074618  133282 kubeadm.go:934] updating node { 192.168.39.193 8444 v1.31.2 crio true true} ...
	I1210 01:09:01.074727  133282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-901295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:01.074794  133282 ssh_runner.go:195] Run: crio config
	I1210 01:09:01.133905  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:01.133943  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:01.133965  133282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:01.133999  133282 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.193 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-901295 NodeName:default-k8s-diff-port-901295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:01.134201  133282 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.193
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-901295"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.193"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.193"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1210 01:09:01.134264  133282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:01.147844  133282 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:01.147931  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:01.160432  133282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1210 01:09:01.180526  133282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:01.200698  133282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1210 01:09:01.216799  133282 ssh_runner.go:195] Run: grep 192.168.39.193	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:01.220381  133282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:01.233079  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:01.361483  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:01.380679  133282 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295 for IP: 192.168.39.193
	I1210 01:09:01.380702  133282 certs.go:194] generating shared ca certs ...
	I1210 01:09:01.380722  133282 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:01.380921  133282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:01.380994  133282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:01.381010  133282 certs.go:256] generating profile certs ...
	I1210 01:09:01.381136  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.key
	I1210 01:09:01.381229  133282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key.b900309b
	I1210 01:09:01.381286  133282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key
	I1210 01:09:01.381437  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:01.381489  133282 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:01.381500  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:01.381537  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:01.381568  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:01.381598  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:01.381658  133282 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:01.382643  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:01.437062  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:01.472383  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:01.503832  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:01.532159  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1210 01:09:01.555926  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1210 01:09:01.578213  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:01.599047  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1210 01:09:01.620628  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:01.643326  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:01.665846  133282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:01.688854  133282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:01.706519  133282 ssh_runner.go:195] Run: openssl version
	I1210 01:09:01.712053  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:01.722297  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726404  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.726491  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:01.731901  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:01.745040  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:01.758663  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763894  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.763945  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:01.771019  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:01.781071  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:01.790898  133282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795494  133282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.795557  133282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:01.800996  133282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
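The three certificate blocks above follow the same pattern: install the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and link /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can find it. A minimal Go sketch of that pattern follows, shelling out to the same openssl command seen in the log; it always recreates the link (ln -fs semantics) and the path is taken from the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert asks openssl for the certificate's subject hash and symlinks
// /etc/ssl/certs/<hash>.0 to the PEM, like the commands in the log above.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: replace any stale link, then point it at the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("failed to register CA:", err)
	}
}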
	I1210 01:09:01.811221  133282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:01.815412  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:01.821621  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:01.829028  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:01.838361  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:01.844663  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:01.850154  133282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
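Each `openssl x509 ... -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. A minimal equivalent in Go using crypto/x509, assuming the certificate file is PEM-encoded; the path is one of those probed in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside the
// next d, matching the semantics of openssl's -checkend flag.
func expiresWithin(certPath string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}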
	I1210 01:09:01.855539  133282 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-901295 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-901295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:01.855625  133282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:01.855663  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.898021  133282 cri.go:89] found id: ""
	I1210 01:09:01.898095  133282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:01.908929  133282 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:01.908947  133282 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:01.909005  133282 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:01.917830  133282 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:01.918982  133282 kubeconfig.go:125] found "default-k8s-diff-port-901295" server: "https://192.168.39.193:8444"
	I1210 01:09:01.921394  133282 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:01.930263  133282 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.193
	I1210 01:09:01.930291  133282 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:01.930304  133282 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:01.930352  133282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:01.966094  133282 cri.go:89] found id: ""
	I1210 01:09:01.966195  133282 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:01.983212  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:01.991944  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:01.991963  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:01.992011  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:09:02.000043  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:02.000094  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:02.008538  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:09:02.016658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:02.016718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:02.025191  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.033198  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:02.033235  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:02.041713  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:09:02.049752  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:02.049801  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:02.058162  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
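The sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it does not match (in this run they are simply absent), so the subsequent `kubeadm init phase kubeconfig all` regenerates them. A minimal sketch of that keep-or-remove decision, with paths and endpoint taken from the log and error handling simplified for the example.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: remove it (rm -f semantics)
			// so kubeadm writes a fresh one.
			_ = os.Remove(f)
			fmt.Printf("%s: removed, will be regenerated\n", f)
			continue
		}
		fmt.Printf("%s: already targets %s, keeping\n", f, endpoint)
	}
}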
	I1210 01:09:02.067001  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:02.178210  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:08:59.760246  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.260582  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:00.760701  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.259686  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:01.759889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.260232  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:02.759769  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.259935  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.760670  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.260443  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.289731  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:06.291608  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:02.340685  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:02.341201  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:02.341233  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:02.341148  134419 retry.go:31] will retry after 1.149002375s: waiting for machine to come up
	I1210 01:09:03.491439  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:03.491777  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:03.491803  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:03.491742  134419 retry.go:31] will retry after 2.260301884s: waiting for machine to come up
	I1210 01:09:05.753701  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:05.754207  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:05.754245  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:05.754151  134419 retry.go:31] will retry after 2.19021466s: waiting for machine to come up
	I1210 01:09:03.022068  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.230465  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.288423  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:03.380544  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:03.380653  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:03.881388  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.381638  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:04.881652  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.380981  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.394784  133282 api_server.go:72] duration metric: took 2.014238708s to wait for apiserver process to appear ...
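The repeated `pgrep -xnf kube-apiserver.*minikube.*` runs above are a simple poll: retry roughly every 500ms until the apiserver process exists. A minimal local sketch of that loop (running pgrep directly instead of through the SSH runner, and without sudo; the pattern and interval mirror the log, the timeout is an assumption).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until it reports a matching process.
// pgrep exits 0 only when at least one process matches the pattern.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}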
	I1210 01:09:05.394817  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:05.394854  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.865790  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.865818  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.865831  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.881775  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.881807  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:07.894896  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:07.914874  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:07.914905  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:08.395143  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.404338  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.404370  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:08.895743  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:08.906401  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:08.906439  133282 api_server.go:103] status: https://192.168.39.193:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:09.394905  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:09:09.400326  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:09:09.411040  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:09.411080  133282 api_server.go:131] duration metric: took 4.016246339s to wait for apiserver health ...
	I1210 01:09:09.411090  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:09:09.411096  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:09.412738  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
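The healthz probes above show the expected progression while the control plane comes back: 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200 "ok". A minimal Go sketch of such a poll loop against /healthz; the URL, interval and timeout are assumptions for the example, and certificate verification is skipped because the probe is unauthenticated.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz keeps requesting the healthz endpoint until it returns 200,
// treating 403 and 500 as transient states to retry.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against a self-signed cert: skip verification,
		// as an unauthenticated smoke check would.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			// 403 (anonymous user) and 500 (post-start hooks not finished)
			// are expected while the apiserver is still bootstrapping.
			fmt.Printf("healthz returned %d; retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.193:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}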
	I1210 01:09:04.760421  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.260154  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:05.760313  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.259902  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:06.760365  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.260060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:07.759720  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.260052  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.759734  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:09.260736  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:08.291848  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:10.790539  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:07.946992  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:07.947528  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:07.947561  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:07.947474  134419 retry.go:31] will retry after 3.212306699s: waiting for machine to come up
	I1210 01:09:11.163716  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:11.164132  132605 main.go:141] libmachine: (no-preload-584179) DBG | unable to find current IP address of domain no-preload-584179 in network mk-no-preload-584179
	I1210 01:09:11.164163  132605 main.go:141] libmachine: (no-preload-584179) DBG | I1210 01:09:11.164092  134419 retry.go:31] will retry after 3.275164589s: waiting for machine to come up
	I1210 01:09:09.413907  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:09.423631  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:09.440030  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:09.449054  133282 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:09.449081  133282 system_pods.go:61] "coredns-7c65d6cfc9-qbdpj" [eec04b43-145a-4cae-9085-185b573be507] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:09.449088  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [c8c570b0-2e66-4cf5-bed6-20ee655ad679] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:09.449100  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [42b2ad48-8b92-4ba4-8a14-6c3e6bdec4a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:09.449116  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [bd2c0e9d-cb31-46a5-b12e-ab70ed05c8e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:09.449127  133282 system_pods.go:61] "kube-proxy-5szz9" [957bab4d-6329-41b4-9980-aaa17133201e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:09.449135  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [1729b062-1bfe-447f-b9ed-29813c7f056a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:09.449144  133282 system_pods.go:61] "metrics-server-6867b74b74-zpj2g" [cdfb5b8e-5b7f-4fc8-8ad8-07ea92f7f737] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:09.449150  133282 system_pods.go:61] "storage-provisioner" [342f814b-f510-4a3b-b27d-52ebbdf85275] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:09.449159  133282 system_pods.go:74] duration metric: took 9.110007ms to wait for pod list to return data ...
	I1210 01:09:09.449168  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:09.452778  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:09.452806  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:09.452818  133282 node_conditions.go:105] duration metric: took 3.643268ms to run NodePressure ...
	I1210 01:09:09.452837  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:09.728171  133282 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732074  133282 kubeadm.go:739] kubelet initialised
	I1210 01:09:09.732096  133282 kubeadm.go:740] duration metric: took 3.900542ms waiting for restarted kubelet to initialise ...
	I1210 01:09:09.732106  133282 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:09.736406  133282 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.740516  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740534  133282 pod_ready.go:82] duration metric: took 4.104848ms for pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.740543  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "coredns-7c65d6cfc9-qbdpj" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.740549  133282 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.744293  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744311  133282 pod_ready.go:82] duration metric: took 3.755781ms for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.744321  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.744326  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.748023  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748045  133282 pod_ready.go:82] duration metric: took 3.712559ms for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.748062  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.748070  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:09.843581  133282 pod_ready.go:98] node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843607  133282 pod_ready.go:82] duration metric: took 95.52817ms for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:09.843621  133282 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-901295" hosting pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-901295" has status "Ready":"False"
	I1210 01:09:09.843632  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.242986  133282 pod_ready.go:93] pod "kube-proxy-5szz9" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:10.243015  133282 pod_ready.go:82] duration metric: took 399.37468ms for pod "kube-proxy-5szz9" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:10.243025  133282 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:12.249815  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
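The pod_ready waits above first skip pods whose node is not yet Ready, then poll each system-critical pod for its own Ready condition. A minimal way to express the same per-pod check from outside the test harness, using kubectl's JSONPath output; the namespace and pod name are taken from the log, while the context name (assumed to match the profile) and the polling budget are assumptions for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 240; i++ { // roughly the 4m0s budget used in the log
		ready, err := podReady("default-k8s-diff-port-901295", "kube-system",
			"kube-scheduler-default-k8s-diff-port-901295")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(1 * time.Second)
	}
	fmt.Println("pod did not become Ready in time")
}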
	I1210 01:09:09.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:10.760547  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.259999  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:11.760315  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.260121  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:12.760217  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.259996  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.760635  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:14.259738  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:13.290686  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.792057  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:14.440802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441315  132605 main.go:141] libmachine: (no-preload-584179) Found IP for machine: 192.168.50.169
	I1210 01:09:14.441338  132605 main.go:141] libmachine: (no-preload-584179) Reserving static IP address...
	I1210 01:09:14.441355  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has current primary IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.441776  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.441830  132605 main.go:141] libmachine: (no-preload-584179) DBG | skip adding static IP to network mk-no-preload-584179 - found existing host DHCP lease matching {name: "no-preload-584179", mac: "52:54:00:94:5e:a7", ip: "192.168.50.169"}
	I1210 01:09:14.441847  132605 main.go:141] libmachine: (no-preload-584179) Reserved static IP address: 192.168.50.169
	I1210 01:09:14.441867  132605 main.go:141] libmachine: (no-preload-584179) Waiting for SSH to be available...
	I1210 01:09:14.441882  132605 main.go:141] libmachine: (no-preload-584179) DBG | Getting to WaitForSSH function...
	I1210 01:09:14.444063  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444360  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.444397  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.444510  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH client type: external
	I1210 01:09:14.444531  132605 main.go:141] libmachine: (no-preload-584179) DBG | Using SSH private key: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa (-rw-------)
	I1210 01:09:14.444565  132605 main.go:141] libmachine: (no-preload-584179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1210 01:09:14.444579  132605 main.go:141] libmachine: (no-preload-584179) DBG | About to run SSH command:
	I1210 01:09:14.444594  132605 main.go:141] libmachine: (no-preload-584179) DBG | exit 0
	I1210 01:09:14.571597  132605 main.go:141] libmachine: (no-preload-584179) DBG | SSH cmd err, output: <nil>: 
	I1210 01:09:14.571997  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetConfigRaw
	I1210 01:09:14.572831  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.576075  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576525  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.576559  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.576843  132605 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/config.json ...
	I1210 01:09:14.577023  132605 machine.go:93] provisionDockerMachine start ...
	I1210 01:09:14.577043  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:14.577263  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.579535  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.579894  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.579925  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.580191  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.580426  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580579  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.580742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.580901  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.581081  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.581092  132605 main.go:141] libmachine: About to run SSH command:
	hostname
	I1210 01:09:14.699453  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1210 01:09:14.699485  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.699734  132605 buildroot.go:166] provisioning hostname "no-preload-584179"
	I1210 01:09:14.699766  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.700011  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.703169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703570  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.703597  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.703742  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.703967  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704170  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.704395  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.704582  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.704802  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.704825  132605 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname
	I1210 01:09:14.836216  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-584179
	
	I1210 01:09:14.836259  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.839077  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839502  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.839536  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.839752  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:14.839958  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840127  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:14.840304  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:14.840534  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:14.840766  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:14.840793  132605 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-584179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-584179/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-584179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1210 01:09:14.965138  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
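The lines above show the provisioner setting the guest's hostname and patching /etc/hosts by running small shell snippets over SSH against 192.168.50.169 with the machine's id_rsa key. As a rough illustration only (this is not minikube's actual ssh_runner/libmachine code; runRemote is a made-up helper), a minimal Go sketch of running one of those commands with golang.org/x/crypto/ssh, using the address, user and key path reported in the log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes a single shell command on the guest over SSH and
// returns its combined stdout/stderr, roughly mirroring what the
// ssh_runner lines in the log above are doing.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The logged ssh invocation passes StrictHostKeyChecking=no; the
		// equivalent (insecure) choice here is to skip host key checks.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Address, user and key path come from the log above; the command
	// mirrors the hostname-provisioning step.
	out, err := runRemote("192.168.50.169:22", "docker",
		"/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa",
		`sudo hostname no-preload-584179 && echo "no-preload-584179" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}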
	I1210 01:09:14.965175  132605 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20062-79135/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-79135/.minikube}
	I1210 01:09:14.965246  132605 buildroot.go:174] setting up certificates
	I1210 01:09:14.965268  132605 provision.go:84] configureAuth start
	I1210 01:09:14.965287  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetMachineName
	I1210 01:09:14.965570  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:14.968666  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969081  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.969116  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.969264  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:14.971772  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972144  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:14.972169  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:14.972337  132605 provision.go:143] copyHostCerts
	I1210 01:09:14.972403  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem, removing ...
	I1210 01:09:14.972428  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem
	I1210 01:09:14.972492  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/ca.pem (1078 bytes)
	I1210 01:09:14.972648  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem, removing ...
	I1210 01:09:14.972663  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem
	I1210 01:09:14.972698  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/cert.pem (1123 bytes)
	I1210 01:09:14.972790  132605 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem, removing ...
	I1210 01:09:14.972803  132605 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem
	I1210 01:09:14.972836  132605 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-79135/.minikube/key.pem (1675 bytes)
	I1210 01:09:14.972915  132605 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem org=jenkins.no-preload-584179 san=[127.0.0.1 192.168.50.169 localhost minikube no-preload-584179]
	I1210 01:09:15.113000  132605 provision.go:177] copyRemoteCerts
	I1210 01:09:15.113067  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1210 01:09:15.113100  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.115838  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116216  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.116243  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.116422  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.116590  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.116726  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.116820  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.199896  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1210 01:09:15.225440  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1210 01:09:15.250028  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1210 01:09:15.274086  132605 provision.go:87] duration metric: took 308.801497ms to configureAuth
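configureAuth above regenerates the machine's server certificate with the SANs listed in the log (127.0.0.1, 192.168.50.169, localhost, minikube, no-preload-584179), signed by the profile's CA, and then scp's ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The following is only a minimal sketch of issuing such a SAN-bearing certificate with Go's crypto/x509; it creates a throw-away CA in memory (minikube instead reuses the ca.pem/ca-key.pem pair from its .minikube/certs directory), and error handling is elided for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throw-away CA, just for the sketch.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches the 26280h0m0s CertExpiration in the profile
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the same SANs the log reports for
	// org=jenkins.no-preload-584179.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-584179"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-584179"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.169")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}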
	I1210 01:09:15.274127  132605 buildroot.go:189] setting minikube options for container-runtime
	I1210 01:09:15.274298  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:15.274390  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.277149  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277509  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.277539  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.277682  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.277842  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.277999  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.278110  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.278260  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.278438  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.278454  132605 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1210 01:09:15.504997  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1210 01:09:15.505080  132605 machine.go:96] duration metric: took 928.040946ms to provisionDockerMachine
	I1210 01:09:15.505103  132605 start.go:293] postStartSetup for "no-preload-584179" (driver="kvm2")
	I1210 01:09:15.505118  132605 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1210 01:09:15.505150  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.505498  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1210 01:09:15.505532  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.508802  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509247  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.509324  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.509448  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.509674  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.509840  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.509985  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.597115  132605 ssh_runner.go:195] Run: cat /etc/os-release
	I1210 01:09:15.602107  132605 info.go:137] Remote host: Buildroot 2023.02.9
	I1210 01:09:15.602135  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/addons for local assets ...
	I1210 01:09:15.602226  132605 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-79135/.minikube/files for local assets ...
	I1210 01:09:15.602330  132605 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem -> 862962.pem in /etc/ssl/certs
	I1210 01:09:15.602453  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1210 01:09:15.611320  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:15.633173  132605 start.go:296] duration metric: took 128.055577ms for postStartSetup
	I1210 01:09:15.633214  132605 fix.go:56] duration metric: took 19.474291224s for fixHost
	I1210 01:09:15.633234  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.635888  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636254  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.636298  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.636472  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.636655  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636827  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.636941  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.637115  132605 main.go:141] libmachine: Using SSH client type: native
	I1210 01:09:15.637284  132605 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.169 22 <nil> <nil>}
	I1210 01:09:15.637295  132605 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1210 01:09:15.746834  132605 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733792955.705138377
	
	I1210 01:09:15.746862  132605 fix.go:216] guest clock: 1733792955.705138377
	I1210 01:09:15.746873  132605 fix.go:229] Guest: 2024-12-10 01:09:15.705138377 +0000 UTC Remote: 2024-12-10 01:09:15.6332178 +0000 UTC m=+353.450037611 (delta=71.920577ms)
	I1210 01:09:15.746899  132605 fix.go:200] guest clock delta is within tolerance: 71.920577ms
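fix.go above reads the guest clock by running `date +%s.%N` over SSH and compares it against the host-side timestamp taken at the same moment, accepting the 71.920577ms delta as within tolerance. A small self-contained sketch of that comparison, using the exact values from the log; the one-second tolerance constant is an assumption for illustration, not the threshold minikube actually uses:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and
// returns how far it drifts from the given host reference time.
func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestDate, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log: guest clock 1733792955.705138377 vs. the
	// host-side "Remote" timestamp 2024-12-10 01:09:15.6332178 UTC.
	host := time.Date(2024, 12, 10, 1, 9, 15, 633217800, time.UTC)
	delta, err := clockDelta("1733792955.705138377", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() <= tolerance)
}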
	I1210 01:09:15.746915  132605 start.go:83] releasing machines lock for "no-preload-584179", held for 19.588029336s
	I1210 01:09:15.746945  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.747285  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:15.750451  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.750900  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.750929  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.751162  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751698  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751882  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:15.751964  132605 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1210 01:09:15.752035  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.752082  132605 ssh_runner.go:195] Run: cat /version.json
	I1210 01:09:15.752104  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:15.754825  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755065  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755249  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755269  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755457  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755549  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:15.755585  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:15.755624  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755718  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:15.755807  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.755929  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:15.755997  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.756266  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:15.756431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:15.834820  132605 ssh_runner.go:195] Run: systemctl --version
	I1210 01:09:15.859263  132605 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1210 01:09:16.006149  132605 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1210 01:09:16.012040  132605 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1210 01:09:16.012116  132605 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1210 01:09:16.026410  132605 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1210 01:09:16.026435  132605 start.go:495] detecting cgroup driver to use...
	I1210 01:09:16.026508  132605 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1210 01:09:16.040833  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1210 01:09:16.053355  132605 docker.go:217] disabling cri-docker service (if available) ...
	I1210 01:09:16.053404  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1210 01:09:16.066169  132605 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1210 01:09:16.078906  132605 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1210 01:09:16.183645  132605 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1210 01:09:16.338131  132605 docker.go:233] disabling docker service ...
	I1210 01:09:16.338210  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1210 01:09:16.353706  132605 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1210 01:09:16.367025  132605 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1210 01:09:16.490857  132605 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1210 01:09:16.599213  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1210 01:09:16.612423  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1210 01:09:16.628989  132605 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1210 01:09:16.629051  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.638381  132605 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1210 01:09:16.638443  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.648140  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.657702  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.667303  132605 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1210 01:09:16.677058  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.686261  132605 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1210 01:09:16.701267  132605 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
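The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as its pause image, cgroupfs as its cgroup manager, "pod" as the conmon cgroup, and net.ipv4.ip_unprivileged_port_start=0 in default_sysctls. For illustration only, a rough Go equivalent of one of those in-place edits (the test itself simply shells out to sed over SSH, as logged):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}

	// Same effect as:
	//   sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' 02-crio.conf
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
}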
	I1210 01:09:16.710630  132605 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1210 01:09:16.719338  132605 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1210 01:09:16.719399  132605 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1210 01:09:16.730675  132605 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1210 01:09:16.739704  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:16.855267  132605 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1210 01:09:16.945551  132605 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1210 01:09:16.945636  132605 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1210 01:09:16.950041  132605 start.go:563] Will wait 60s for crictl version
	I1210 01:09:16.950089  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:16.953415  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1210 01:09:16.986363  132605 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1210 01:09:16.986452  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.013313  132605 ssh_runner.go:195] Run: crio --version
	I1210 01:09:17.040732  132605 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1210 01:09:17.042078  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetIP
	I1210 01:09:17.044697  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.044992  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:17.045017  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:17.045180  132605 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1210 01:09:17.048776  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:17.059862  132605 kubeadm.go:883] updating cluster {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1210 01:09:17.059969  132605 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1210 01:09:17.060002  132605 ssh_runner.go:195] Run: sudo crictl images --output json
	I1210 01:09:17.092954  132605 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1210 01:09:17.092981  132605 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1210 01:09:17.093021  132605 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.093063  132605 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.093076  132605 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.093096  132605 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1210 01:09:17.093157  132605 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.093084  132605 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.093235  132605 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.093250  132605 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1210 01:09:17.094787  132605 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.094804  132605 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.094742  132605 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.094810  132605 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:17.094753  132605 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.094820  132605 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.094765  132605 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:14.765671  133282 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:15.750454  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:15.750473  133282 pod_ready.go:82] duration metric: took 5.507439947s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:15.750486  133282 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:14.759976  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.259717  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:15.760410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.260034  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:16.759708  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.260433  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:17.760687  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.260284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.760557  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:19.260362  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:18.290233  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.291198  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:17.246846  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.248658  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.250095  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.254067  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.256089  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.278344  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1210 01:09:17.278473  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.369439  132605 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1210 01:09:17.369501  132605 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.369501  132605 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1210 01:09:17.369540  132605 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.369553  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.369604  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.417953  132605 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1210 01:09:17.418006  132605 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.418052  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423233  132605 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1210 01:09:17.423274  132605 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1210 01:09:17.423281  132605 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.423306  132605 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.423326  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.423429  132605 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1210 01:09:17.423469  132605 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.423503  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.505918  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.505973  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.505933  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.506033  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.506057  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.506093  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.622808  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.635839  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.637443  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.637478  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.637587  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.637611  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.688747  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1210 01:09:17.768097  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1210 01:09:17.768175  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1210 01:09:17.768211  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1210 01:09:17.768320  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1210 01:09:17.768313  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1210 01:09:17.805141  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1210 01:09:17.805252  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.885468  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1210 01:09:17.885628  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:17.893263  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1210 01:09:17.893312  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1210 01:09:17.893335  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1210 01:09:17.893381  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:17.893399  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1210 01:09:17.893411  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:17.893417  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:17.893464  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1210 01:09:17.893479  132605 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.893454  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:17.893518  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1210 01:09:17.895148  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1210 01:09:18.009923  132605 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.497870  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.604325674s)
	I1210 01:09:21.497908  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1210 01:09:21.497931  132605 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497925  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2: (3.604515411s)
	I1210 01:09:21.497964  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2: (3.604523853s)
	I1210 01:09:21.497980  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1210 01:09:21.497988  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1210 01:09:21.497968  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1210 01:09:21.498030  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2: (3.604504871s)
	I1210 01:09:21.498065  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1210 01:09:21.498092  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2: (3.604626001s)
	I1210 01:09:21.498135  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1210 01:09:21.498137  132605 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.48818734s)
	I1210 01:09:21.498180  132605 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1210 01:09:21.498210  132605 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:21.498262  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:09:17.758044  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:20.257446  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:19.759901  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.260224  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:20.760523  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.259846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:21.759997  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.259939  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.760414  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.260359  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:23.760075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:24.260519  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:22.291428  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.291612  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:26.791400  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:23.369885  132605 ssh_runner.go:235] Completed: which crictl: (1.871582184s)
	I1210 01:09:23.369947  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.871938064s)
	I1210 01:09:23.369967  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1210 01:09:23.369976  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:23.370000  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:23.370042  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1210 01:09:25.661942  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.291860829s)
	I1210 01:09:25.661984  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1210 01:09:25.661990  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.291995779s)
	I1210 01:09:25.662011  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1210 01:09:25.662066  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025354  132605 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.36318975s)
	I1210 01:09:27.025446  132605 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:27.025517  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.363423006s)
	I1210 01:09:27.025546  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1210 01:09:27.025604  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.025677  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1210 01:09:27.063571  132605 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1210 01:09:27.063700  132605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:22.757215  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.757584  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:27.256535  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:24.760537  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.259994  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:25.760205  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.260504  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:26.759648  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.259995  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:27.760383  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.259992  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.760004  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:29.260496  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:28.813963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:30.837175  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.106253  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.080542846s)
	I1210 01:09:29.106295  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1210 01:09:29.106312  132605 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.042586527s)
	I1210 01:09:29.106326  132605 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:29.106345  132605 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1210 01:09:29.106392  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1210 01:09:30.968622  132605 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.862203504s)
	I1210 01:09:30.968650  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1210 01:09:30.968679  132605 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:30.968732  132605 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1210 01:09:31.612519  132605 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20062-79135/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1210 01:09:31.612559  132605 cache_images.go:123] Successfully loaded all cached images
	I1210 01:09:31.612564  132605 cache_images.go:92] duration metric: took 14.519573158s to LoadCachedImages
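
	The lines above show the cached image tarballs being loaded into CRI-O with podman over SSH. A minimal local sketch of one such load step, illustrative only (the tarball path is the one from the log, and errors are simply fatal):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Load a cached image tarball into the podman/CRI-O image store,
		// mirroring the "sudo podman load -i ..." commands in the log.
		out, err := exec.Command("sudo", "podman", "load", "-i",
			"/var/lib/minikube/images/kube-proxy_v1.31.2").CombinedOutput()
		if err != nil {
			log.Fatalf("podman load failed: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}
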
	I1210 01:09:31.612577  132605 kubeadm.go:934] updating node { 192.168.50.169 8443 v1.31.2 crio true true} ...
	I1210 01:09:31.612686  132605 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-584179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1210 01:09:31.612750  132605 ssh_runner.go:195] Run: crio config
	I1210 01:09:31.661155  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:31.661185  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:09:31.661199  132605 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1210 01:09:31.661228  132605 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.169 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-584179 NodeName:no-preload-584179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.169"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.169 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1210 01:09:31.661406  132605 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.169
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-584179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.169"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.169"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
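
	The block above is the multi-document kubeadm.yaml that minikube generates and copies to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration. A small sketch that lists the documents in such a file, assuming gopkg.in/yaml.v3 is available (not part of the test run):

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Decode each "---"-separated YAML document and print its kind,
		// confirming the four config kinds shown in the log above.
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}
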
	
	I1210 01:09:31.661511  132605 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1210 01:09:31.671185  132605 binaries.go:44] Found k8s binaries, skipping transfer
	I1210 01:09:31.671259  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1210 01:09:31.679776  132605 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1210 01:09:31.694290  132605 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1210 01:09:31.708644  132605 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1210 01:09:31.725292  132605 ssh_runner.go:195] Run: grep 192.168.50.169	control-plane.minikube.internal$ /etc/hosts
	I1210 01:09:31.729070  132605 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.169	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1210 01:09:31.740077  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:31.857074  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:31.872257  132605 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179 for IP: 192.168.50.169
	I1210 01:09:31.872280  132605 certs.go:194] generating shared ca certs ...
	I1210 01:09:31.872314  132605 certs.go:226] acquiring lock for ca certs: {Name:mk82048ce3206adab88c39bd4bfb12d93c4bec87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:31.872515  132605 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key
	I1210 01:09:31.872569  132605 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key
	I1210 01:09:31.872579  132605 certs.go:256] generating profile certs ...
	I1210 01:09:31.872694  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.key
	I1210 01:09:31.872775  132605 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key.0a939830
	I1210 01:09:31.872828  132605 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key
	I1210 01:09:31.872979  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem (1338 bytes)
	W1210 01:09:31.873020  132605 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296_empty.pem, impossibly tiny 0 bytes
	I1210 01:09:31.873034  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca-key.pem (1675 bytes)
	I1210 01:09:31.873069  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/ca.pem (1078 bytes)
	I1210 01:09:31.873098  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/cert.pem (1123 bytes)
	I1210 01:09:31.873127  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/certs/key.pem (1675 bytes)
	I1210 01:09:31.873188  132605 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem (1708 bytes)
	I1210 01:09:31.874099  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1210 01:09:31.906792  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1210 01:09:31.939994  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1210 01:09:31.965628  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1210 01:09:31.992020  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1210 01:09:32.015601  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1210 01:09:32.048113  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1210 01:09:32.069416  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1210 01:09:32.090144  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/ssl/certs/862962.pem --> /usr/share/ca-certificates/862962.pem (1708 bytes)
	I1210 01:09:32.111484  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1210 01:09:32.135390  132605 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-79135/.minikube/certs/86296.pem --> /usr/share/ca-certificates/86296.pem (1338 bytes)
	I1210 01:09:32.157978  132605 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1210 01:09:32.173851  132605 ssh_runner.go:195] Run: openssl version
	I1210 01:09:32.179068  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/862962.pem && ln -fs /usr/share/ca-certificates/862962.pem /etc/ssl/certs/862962.pem"
	I1210 01:09:32.188602  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192585  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:55 /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.192629  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/862962.pem
	I1210 01:09:32.197637  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/862962.pem /etc/ssl/certs/3ec20f2e.0"
	I1210 01:09:32.207401  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1210 01:09:32.216700  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:29.756368  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:31.756948  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:29.760244  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.260534  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:30.760426  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.259767  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:31.759951  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.259919  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:32.760161  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.260272  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.759885  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.259970  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:33.290818  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:35.790889  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:32.220620  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:44 /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.220663  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1210 01:09:32.225661  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1210 01:09:32.235325  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/86296.pem && ln -fs /usr/share/ca-certificates/86296.pem /etc/ssl/certs/86296.pem"
	I1210 01:09:32.244746  132605 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248733  132605 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:55 /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.248774  132605 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/86296.pem
	I1210 01:09:32.254022  132605 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0"
	I1210 01:09:32.264208  132605 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1210 01:09:32.268332  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1210 01:09:32.273902  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1210 01:09:32.279525  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1210 01:09:32.284958  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1210 01:09:32.291412  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1210 01:09:32.296527  132605 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
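
	The openssl runs above use "-checkend 86400" to verify that each control-plane certificate is still valid for at least the next 24 hours. An equivalent check sketched in Go; the certificate path is one of those from the log and is illustrative only:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// Same condition as `openssl x509 -checkend 86400`: does the
		// certificate expire within the next 24 hours?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
		} else {
			fmt.Println("certificate is valid for at least 24h")
		}
	}
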
	I1210 01:09:32.302123  132605 kubeadm.go:392] StartCluster: {Name:no-preload-584179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-584179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 01:09:32.302233  132605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1210 01:09:32.302293  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.345135  132605 cri.go:89] found id: ""
	I1210 01:09:32.345212  132605 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1210 01:09:32.355077  132605 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1210 01:09:32.355093  132605 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1210 01:09:32.355131  132605 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1210 01:09:32.364021  132605 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1210 01:09:32.365012  132605 kubeconfig.go:125] found "no-preload-584179" server: "https://192.168.50.169:8443"
	I1210 01:09:32.367348  132605 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1210 01:09:32.375938  132605 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.169
	I1210 01:09:32.375967  132605 kubeadm.go:1160] stopping kube-system containers ...
	I1210 01:09:32.375979  132605 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1210 01:09:32.376032  132605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1210 01:09:32.408948  132605 cri.go:89] found id: ""
	I1210 01:09:32.409014  132605 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1210 01:09:32.427628  132605 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:09:32.437321  132605 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:09:32.437348  132605 kubeadm.go:157] found existing configuration files:
	
	I1210 01:09:32.437391  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:09:32.446114  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:09:32.446155  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:09:32.455531  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:09:32.465558  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:09:32.465611  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:09:32.475265  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.483703  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:09:32.483750  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:09:32.492041  132605 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:09:32.499895  132605 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:09:32.499948  132605 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:09:32.508205  132605 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:09:32.516625  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:32.628252  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.675979  132605 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.04768244s)
	I1210 01:09:33.676029  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.873465  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:33.951722  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:34.064512  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:09:34.064627  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:34.565753  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.065163  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.104915  132605 api_server.go:72] duration metric: took 1.040405424s to wait for apiserver process to appear ...
	I1210 01:09:35.104944  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:09:35.104970  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:35.105426  132605 api_server.go:269] stopped: https://192.168.50.169:8443/healthz: Get "https://192.168.50.169:8443/healthz": dial tcp 192.168.50.169:8443: connect: connection refused
	I1210 01:09:35.606063  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:34.256982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:36.756184  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:38.326687  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.326719  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.326736  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.400207  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1210 01:09:38.400236  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1210 01:09:38.605572  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:38.610811  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:38.610849  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.105424  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.117268  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1210 01:09:39.117303  132605 api_server.go:103] status: https://192.168.50.169:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1210 01:09:39.605417  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:09:39.614444  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:09:39.620993  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:09:39.621020  132605 api_server.go:131] duration metric: took 4.51606815s to wait for apiserver health ...
	I1210 01:09:39.621032  132605 cni.go:84] Creating CNI manager for ""
	I1210 01:09:39.621041  132605 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
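
	The healthz probes above progress from "connection refused" while the apiserver starts, to 403 because the anonymous probe cannot read /healthz, to 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish, and finally to 200. A rough sketch of that poll loop; the address is the one from the log, the timings are illustrative, and TLS verification is skipped to match an anonymous probe against the self-signed apiserver certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.169:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403 and 500 both mean "not ready yet"; keep polling.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		log.Fatal("apiserver did not become healthy before the deadline")
	}
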
	I1210 01:09:34.759835  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.260276  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:35.759791  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.259684  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:36.760649  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.259922  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:37.760558  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.260712  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:38.759679  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.259678  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:39.622539  132605 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:09:39.623685  132605 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:09:39.643844  132605 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1210 01:09:39.678622  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:09:39.692082  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:09:39.692124  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1210 01:09:39.692133  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1210 01:09:39.692141  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1210 01:09:39.692149  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1210 01:09:39.692154  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1210 01:09:39.692162  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1210 01:09:39.692174  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:09:39.692183  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1210 01:09:39.692200  132605 system_pods.go:74] duration metric: took 13.548523ms to wait for pod list to return data ...
	I1210 01:09:39.692214  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:09:39.696707  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:09:39.696740  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:09:39.696754  132605 node_conditions.go:105] duration metric: took 4.534393ms to run NodePressure ...
	I1210 01:09:39.696781  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1210 01:09:39.977595  132605 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981694  132605 kubeadm.go:739] kubelet initialised
	I1210 01:09:39.981714  132605 kubeadm.go:740] duration metric: took 4.094235ms waiting for restarted kubelet to initialise ...
	I1210 01:09:39.981724  132605 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:39.987484  132605 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.992414  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992434  132605 pod_ready.go:82] duration metric: took 4.925954ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.992442  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.992448  132605 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:39.996262  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996291  132605 pod_ready.go:82] duration metric: took 3.826925ms for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:39.996301  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "etcd-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:39.996309  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.000642  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000659  132605 pod_ready.go:82] duration metric: took 4.340955ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.000668  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-apiserver-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.000676  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.082165  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082191  132605 pod_ready.go:82] duration metric: took 81.505218ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.082204  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.082214  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.483273  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483306  132605 pod_ready.go:82] duration metric: took 401.082947ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.483318  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-proxy-xcjs2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.483329  132605 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:40.882587  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882617  132605 pod_ready.go:82] duration metric: took 399.278598ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:40.882629  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "kube-scheduler-no-preload-584179" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:40.882641  132605 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:41.281474  132605 pod_ready.go:98] node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281502  132605 pod_ready.go:82] duration metric: took 398.850415ms for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:09:41.281516  132605 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-584179" hosting pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:41.281526  132605 pod_ready.go:39] duration metric: took 1.299793175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
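
	The pod_ready waits above inspect each system pod's Ready condition, and skip pods hosted on a node that itself is not yet "Ready". A minimal client-go sketch of the underlying condition lookup; the kubeconfig path is a placeholder and the pod name is one from the log:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Fetch a system pod and report its PodReady condition, the same
		// signal the pod_ready waits are polling for.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-no-preload-584179", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("PodReady=%s\n", cond.Status)
			}
		}
	}
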
	I1210 01:09:41.281547  132605 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:09:41.293293  132605 ops.go:34] apiserver oom_adj: -16
	I1210 01:09:41.293310  132605 kubeadm.go:597] duration metric: took 8.938211553s to restartPrimaryControlPlane
	I1210 01:09:41.293318  132605 kubeadm.go:394] duration metric: took 8.991203373s to StartCluster
	I1210 01:09:41.293334  132605 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.293389  132605 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:09:41.295054  132605 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:09:41.295293  132605 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.169 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:09:41.295376  132605 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:09:41.295496  132605 addons.go:69] Setting storage-provisioner=true in profile "no-preload-584179"
	I1210 01:09:41.295519  132605 addons.go:234] Setting addon storage-provisioner=true in "no-preload-584179"
	W1210 01:09:41.295529  132605 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:09:41.295527  132605 config.go:182] Loaded profile config "no-preload-584179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:09:41.295581  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295588  132605 addons.go:69] Setting metrics-server=true in profile "no-preload-584179"
	I1210 01:09:41.295602  132605 addons.go:234] Setting addon metrics-server=true in "no-preload-584179"
	I1210 01:09:41.295604  132605 addons.go:69] Setting default-storageclass=true in profile "no-preload-584179"
	W1210 01:09:41.295615  132605 addons.go:243] addon metrics-server should already be in state true
	I1210 01:09:41.295627  132605 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-584179"
	I1210 01:09:41.295643  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.295906  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.295951  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296035  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296052  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296089  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.296134  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.296994  132605 out.go:177] * Verifying Kubernetes components...
	I1210 01:09:41.298351  132605 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:09:41.312841  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32935
	I1210 01:09:41.313326  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.313883  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.313906  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.314202  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.314798  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.314846  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.316718  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45347
	I1210 01:09:41.317263  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.317829  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.317857  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.318269  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.318870  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.318916  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.329929  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I1210 01:09:41.330341  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.330879  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.330894  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.331331  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.331505  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.332041  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1210 01:09:41.332457  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.333084  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.333107  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.333516  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.333728  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.335268  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I1210 01:09:41.336123  132605 addons.go:234] Setting addon default-storageclass=true in "no-preload-584179"
	W1210 01:09:41.336137  132605 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:09:41.336161  132605 host.go:66] Checking if "no-preload-584179" exists ...
	I1210 01:09:41.336395  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.336422  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.336596  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.336686  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.337074  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.337088  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.337468  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.337656  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.338494  132605 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:09:41.339130  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.339843  132605 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.339856  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:09:41.339870  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.341253  132605 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:09:37.793895  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:40.291282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.342436  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.342604  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:09:41.342620  132605 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:09:41.342633  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.342844  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.342861  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.343122  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.343399  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.343569  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.343683  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.345344  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345814  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.345834  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.345982  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.346159  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.346293  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.346431  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.352593  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I1210 01:09:41.352930  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.353292  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.353307  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.353545  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.354016  132605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:09:41.354045  132605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:09:41.370168  132605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I1210 01:09:41.370736  132605 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:09:41.371289  132605 main.go:141] libmachine: Using API Version  1
	I1210 01:09:41.371315  132605 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:09:41.371670  132605 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:09:41.371879  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetState
	I1210 01:09:41.373679  132605 main.go:141] libmachine: (no-preload-584179) Calling .DriverName
	I1210 01:09:41.374802  132605 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.374821  132605 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:09:41.374841  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHHostname
	I1210 01:09:41.377611  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378065  132605 main.go:141] libmachine: (no-preload-584179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:5e:a7", ip: ""} in network mk-no-preload-584179: {Iface:virbr2 ExpiryTime:2024-12-10 02:09:07 +0000 UTC Type:0 Mac:52:54:00:94:5e:a7 Iaid: IPaddr:192.168.50.169 Prefix:24 Hostname:no-preload-584179 Clientid:01:52:54:00:94:5e:a7}
	I1210 01:09:41.378089  132605 main.go:141] libmachine: (no-preload-584179) DBG | domain no-preload-584179 has defined IP address 192.168.50.169 and MAC address 52:54:00:94:5e:a7 in network mk-no-preload-584179
	I1210 01:09:41.378261  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHPort
	I1210 01:09:41.378411  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHKeyPath
	I1210 01:09:41.378571  132605 main.go:141] libmachine: (no-preload-584179) Calling .GetSSHUsername
	I1210 01:09:41.378711  132605 sshutil.go:53] new ssh client: &{IP:192.168.50.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/no-preload-584179/id_rsa Username:docker}
	I1210 01:09:41.492956  132605 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:09:41.510713  132605 node_ready.go:35] waiting up to 6m0s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:41.591523  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:09:41.612369  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:09:41.612393  132605 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:09:41.641040  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:09:41.672955  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:09:41.672982  132605 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:09:41.720885  132605 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:41.720921  132605 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:09:41.773885  132605 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:09:39.256804  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:41.758321  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.945125  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.304042618s)
	I1210 01:09:42.945192  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945207  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945233  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.171304002s)
	I1210 01:09:42.945292  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945310  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945452  132605 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.353900883s)
	I1210 01:09:42.945476  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945488  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945543  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945556  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945587  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945601  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945609  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945616  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945819  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945847  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945832  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.945856  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.945863  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.945897  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.945907  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.945916  132605 addons.go:475] Verifying addon metrics-server=true in "no-preload-584179"
	I1210 01:09:42.945926  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946083  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946115  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946120  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.946659  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946679  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.946690  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.946699  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.946960  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.946976  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.954783  132605 main.go:141] libmachine: Making call to close driver server
	I1210 01:09:42.954805  132605 main.go:141] libmachine: (no-preload-584179) Calling .Close
	I1210 01:09:42.955037  132605 main.go:141] libmachine: (no-preload-584179) DBG | Closing plugin on server side
	I1210 01:09:42.955056  132605 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:09:42.955101  132605 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:09:42.956592  132605 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1210 01:09:39.759613  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.260466  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:40.760527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.260450  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:41.759950  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.260075  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.760661  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.259780  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:43.759690  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:44.260376  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:42.791249  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:45.290804  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:42.957891  132605 addons.go:510] duration metric: took 1.66252058s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
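	The addon sequence above copies each manifest to /etc/kubernetes/addons and applies it with the cluster's bundled kubectl over SSH. A minimal local sketch of that apply step, assuming a kubectl binary on PATH; the paths below are illustrative placeholders, not values taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifest shells out to kubectl much like the ssh_runner calls in the log,
	// except locally. kubeconfigPath and manifestPath are hypothetical example values.
	func applyManifest(kubeconfigPath, manifestPath string) error {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfigPath, "apply", "-f", manifestPath)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply %s: %v\n%s", manifestPath, err, out)
		}
		fmt.Printf("applied %s:\n%s", manifestPath, out)
		return nil
	}

	func main() {
		if err := applyManifest("/var/lib/minikube/kubeconfig", "storage-provisioner.yaml"); err != nil {
			fmt.Println(err)
		}
	}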
	I1210 01:09:43.514278  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:45.514855  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:44.256730  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:46.257699  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:44.759802  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.260533  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:45.760410  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:45.760500  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:45.797499  133241 cri.go:89] found id: ""
	I1210 01:09:45.797522  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.797533  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:45.797539  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:45.797596  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:45.827841  133241 cri.go:89] found id: ""
	I1210 01:09:45.827872  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.827885  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:45.827893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:45.827952  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:45.861227  133241 cri.go:89] found id: ""
	I1210 01:09:45.861251  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.861259  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:45.861264  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:45.861331  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:45.895142  133241 cri.go:89] found id: ""
	I1210 01:09:45.895174  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.895185  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:45.895191  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:45.895266  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:45.931113  133241 cri.go:89] found id: ""
	I1210 01:09:45.931146  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.931157  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:45.931164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:45.931251  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:45.964348  133241 cri.go:89] found id: ""
	I1210 01:09:45.964388  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.964396  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:45.964402  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:45.964453  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:45.997808  133241 cri.go:89] found id: ""
	I1210 01:09:45.997829  133241 logs.go:282] 0 containers: []
	W1210 01:09:45.997837  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:45.997842  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:45.997888  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:46.028464  133241 cri.go:89] found id: ""
	I1210 01:09:46.028490  133241 logs.go:282] 0 containers: []
	W1210 01:09:46.028499  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:46.028508  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:46.028524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:46.136225  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:46.136257  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:46.136275  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:46.211654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:46.211686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:46.254008  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:46.254046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:46.305985  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:46.306020  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:48.818889  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:48.831511  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:48.831575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:48.863536  133241 cri.go:89] found id: ""
	I1210 01:09:48.863566  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.863577  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:48.863585  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:48.863642  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:48.895340  133241 cri.go:89] found id: ""
	I1210 01:09:48.895362  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.895371  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:48.895378  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:48.895439  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:48.930962  133241 cri.go:89] found id: ""
	I1210 01:09:48.930989  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.930997  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:48.931003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:48.931060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:48.966437  133241 cri.go:89] found id: ""
	I1210 01:09:48.966467  133241 logs.go:282] 0 containers: []
	W1210 01:09:48.966479  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:48.966488  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:48.966553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:49.001290  133241 cri.go:89] found id: ""
	I1210 01:09:49.001321  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.001333  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:49.001340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:49.001404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:49.036472  133241 cri.go:89] found id: ""
	I1210 01:09:49.036499  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.036510  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:49.036532  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:49.036609  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:49.066550  133241 cri.go:89] found id: ""
	I1210 01:09:49.066589  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.066600  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:49.066607  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:49.066669  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:49.097358  133241 cri.go:89] found id: ""
	I1210 01:09:49.097383  133241 logs.go:282] 0 containers: []
	W1210 01:09:49.097392  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:49.097402  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:49.097413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:49.170082  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:49.170116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:49.209684  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:49.209747  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:49.268714  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:49.268755  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:49.281979  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:49.282014  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:49.350901  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:47.790228  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:49.791158  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:48.014087  132605 node_ready.go:53] node "no-preload-584179" has status "Ready":"False"
	I1210 01:09:49.014932  132605 node_ready.go:49] node "no-preload-584179" has status "Ready":"True"
	I1210 01:09:49.014960  132605 node_ready.go:38] duration metric: took 7.504211405s for node "no-preload-584179" to be "Ready" ...
	I1210 01:09:49.014974  132605 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:09:49.020519  132605 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025466  132605 pod_ready.go:93] pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:49.025489  132605 pod_ready.go:82] duration metric: took 4.945455ms for pod "coredns-7c65d6cfc9-hhsm5" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:49.025501  132605 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.031580  132605 pod_ready.go:103] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.532544  132605 pod_ready.go:93] pod "etcd-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.532570  132605 pod_ready.go:82] duration metric: took 2.507060173s for pod "etcd-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.532582  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537498  132605 pod_ready.go:93] pod "kube-apiserver-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.537516  132605 pod_ready.go:82] duration metric: took 4.927374ms for pod "kube-apiserver-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.537525  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542147  132605 pod_ready.go:93] pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.542161  132605 pod_ready.go:82] duration metric: took 4.630752ms for pod "kube-controller-manager-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.542169  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546645  132605 pod_ready.go:93] pod "kube-proxy-xcjs2" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.546660  132605 pod_ready.go:82] duration metric: took 4.486291ms for pod "kube-proxy-xcjs2" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.546667  132605 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815308  132605 pod_ready.go:93] pod "kube-scheduler-no-preload-584179" in "kube-system" namespace has status "Ready":"True"
	I1210 01:09:51.815333  132605 pod_ready.go:82] duration metric: took 268.661005ms for pod "kube-scheduler-no-preload-584179" in "kube-system" namespace to be "Ready" ...
	I1210 01:09:51.815343  132605 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
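	The node_ready/pod_ready entries above poll the API server until the node and each system-critical pod report Ready. A rough client-go equivalent of that readiness check, assuming standard client-go packages; the kubeconfig path, namespace, and pod name are only examples drawn from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod's Ready condition is True or the timeout expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // the log polls on a similar short interval
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-584179", 6*time.Minute))
	}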
	I1210 01:09:48.756571  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.256434  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:51.851559  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:51.864804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:51.864862  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:51.907102  133241 cri.go:89] found id: ""
	I1210 01:09:51.907141  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.907154  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:51.907162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:51.907218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:51.937672  133241 cri.go:89] found id: ""
	I1210 01:09:51.937695  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.937702  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:51.937708  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:51.937755  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:51.966886  133241 cri.go:89] found id: ""
	I1210 01:09:51.966911  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.966919  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:51.966925  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:51.966981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:51.996806  133241 cri.go:89] found id: ""
	I1210 01:09:51.996830  133241 logs.go:282] 0 containers: []
	W1210 01:09:51.996838  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:51.996844  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:51.996901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:52.028041  133241 cri.go:89] found id: ""
	I1210 01:09:52.028083  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.028091  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:52.028097  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:52.028150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:52.057921  133241 cri.go:89] found id: ""
	I1210 01:09:52.057946  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.057954  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:52.057960  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:52.058010  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:52.088367  133241 cri.go:89] found id: ""
	I1210 01:09:52.088406  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.088415  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:52.088422  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:52.088487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:52.117636  133241 cri.go:89] found id: ""
	I1210 01:09:52.117667  133241 logs.go:282] 0 containers: []
	W1210 01:09:52.117679  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:52.117691  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:52.117705  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:52.151628  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:52.151655  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:52.202083  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:52.202116  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:52.214973  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:52.215009  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:52.282101  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:52.282126  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:52.282139  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:52.290617  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.790008  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:56.790504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.820512  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.824852  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:53.258005  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:55.755992  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:54.862326  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:54.874349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:54.874418  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:54.906983  133241 cri.go:89] found id: ""
	I1210 01:09:54.907006  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.907013  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:54.907019  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:54.907069  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:54.938187  133241 cri.go:89] found id: ""
	I1210 01:09:54.938213  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.938221  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:54.938226  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:54.938290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:54.974481  133241 cri.go:89] found id: ""
	I1210 01:09:54.974514  133241 logs.go:282] 0 containers: []
	W1210 01:09:54.974526  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:54.974534  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:54.974619  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:55.005904  133241 cri.go:89] found id: ""
	I1210 01:09:55.005928  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.005941  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:55.005949  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:55.006015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:55.037698  133241 cri.go:89] found id: ""
	I1210 01:09:55.037729  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.037741  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:55.037748  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:55.037816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:55.067926  133241 cri.go:89] found id: ""
	I1210 01:09:55.067958  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.067966  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:55.067971  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:55.068016  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:55.098309  133241 cri.go:89] found id: ""
	I1210 01:09:55.098333  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.098341  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:55.098349  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:55.098400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:55.145177  133241 cri.go:89] found id: ""
	I1210 01:09:55.145212  133241 logs.go:282] 0 containers: []
	W1210 01:09:55.145221  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:55.145231  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:55.145243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:55.193307  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:55.193338  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:55.205536  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:55.205558  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:55.271248  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:55.271276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:55.271295  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:55.349465  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:55.349503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:09:57.887749  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:09:57.899698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:09:57.899765  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:09:57.933170  133241 cri.go:89] found id: ""
	I1210 01:09:57.933196  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.933206  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:09:57.933214  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:09:57.933282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:09:57.964237  133241 cri.go:89] found id: ""
	I1210 01:09:57.964271  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.964284  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:09:57.964292  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:09:57.964360  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:09:57.996447  133241 cri.go:89] found id: ""
	I1210 01:09:57.996481  133241 logs.go:282] 0 containers: []
	W1210 01:09:57.996493  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:09:57.996501  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:09:57.996562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:09:58.030007  133241 cri.go:89] found id: ""
	I1210 01:09:58.030034  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.030046  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:09:58.030054  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:09:58.030120  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:09:58.063634  133241 cri.go:89] found id: ""
	I1210 01:09:58.063667  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.063678  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:09:58.063686  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:09:58.063748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:09:58.095076  133241 cri.go:89] found id: ""
	I1210 01:09:58.095105  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.095114  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:09:58.095120  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:09:58.095177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:09:58.127107  133241 cri.go:89] found id: ""
	I1210 01:09:58.127147  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.127160  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:09:58.127169  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:09:58.127243  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:09:58.158137  133241 cri.go:89] found id: ""
	I1210 01:09:58.158167  133241 logs.go:282] 0 containers: []
	W1210 01:09:58.158177  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:09:58.158190  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:09:58.158213  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:09:58.209195  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:09:58.209236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:09:58.221816  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:09:58.221841  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:09:58.290396  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:09:58.290416  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:09:58.290430  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:09:58.370235  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:09:58.370265  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
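	Each diagnostic cycle above probes crictl for every control-plane component and, when no container IDs come back, falls through to journalctl and dmesg. A small sketch of that crictl probe, assuming crictl is installed locally (the real flow runs it over SSH on the node):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers returns the container IDs crictl reports for a name filter,
	// mirroring the "found id" / "0 containers" lines in the log.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}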
	I1210 01:09:58.791561  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:01.290503  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.321571  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.322349  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:09:58.256526  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.756754  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:00.908076  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:00.920898  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:00.920985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:00.955432  133241 cri.go:89] found id: ""
	I1210 01:10:00.955469  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.955481  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:00.955490  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:00.955550  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:00.987580  133241 cri.go:89] found id: ""
	I1210 01:10:00.987606  133241 logs.go:282] 0 containers: []
	W1210 01:10:00.987615  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:00.987621  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:00.987670  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:01.018741  133241 cri.go:89] found id: ""
	I1210 01:10:01.018766  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.018773  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:01.018781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:01.018840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:01.049817  133241 cri.go:89] found id: ""
	I1210 01:10:01.049849  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.049860  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:01.049879  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:01.049946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:01.081736  133241 cri.go:89] found id: ""
	I1210 01:10:01.081765  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.081775  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:01.081781  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:01.081829  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:01.110990  133241 cri.go:89] found id: ""
	I1210 01:10:01.111015  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.111026  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:01.111034  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:01.111096  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:01.140737  133241 cri.go:89] found id: ""
	I1210 01:10:01.140767  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.140777  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:01.140785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:01.140848  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:01.170628  133241 cri.go:89] found id: ""
	I1210 01:10:01.170662  133241 logs.go:282] 0 containers: []
	W1210 01:10:01.170674  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:01.170686  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:01.170701  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:01.222358  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:01.222389  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:01.235640  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:01.235668  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:01.302726  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:01.302745  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:01.302762  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:01.383817  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:01.383855  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.921112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:03.933517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:03.933592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:03.967318  133241 cri.go:89] found id: ""
	I1210 01:10:03.967344  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.967353  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:03.967358  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:03.967411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:03.998743  133241 cri.go:89] found id: ""
	I1210 01:10:03.998768  133241 logs.go:282] 0 containers: []
	W1210 01:10:03.998776  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:03.998782  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:03.998842  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:04.033209  133241 cri.go:89] found id: ""
	I1210 01:10:04.033235  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.033247  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:04.033255  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:04.033319  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:04.064815  133241 cri.go:89] found id: ""
	I1210 01:10:04.064845  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.064857  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:04.064864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:04.064921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:04.098676  133241 cri.go:89] found id: ""
	I1210 01:10:04.098699  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.098707  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:04.098712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:04.098763  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:04.129693  133241 cri.go:89] found id: ""
	I1210 01:10:04.129720  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.129732  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:04.129741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:04.129809  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:04.162158  133241 cri.go:89] found id: ""
	I1210 01:10:04.162195  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.162203  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:04.162209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:04.162276  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:04.194376  133241 cri.go:89] found id: ""
	I1210 01:10:04.194425  133241 logs.go:282] 0 containers: []
	W1210 01:10:04.194436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:04.194446  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:04.194462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:04.246674  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:04.246702  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:04.259142  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:04.259169  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:04.330034  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:04.330054  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:04.330067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:04.410042  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:04.410089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:03.790690  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.290723  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:02.821628  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:04.822691  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.823821  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:03.256410  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:05.756520  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:06.948623  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:06.960727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:06.960811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:06.993176  133241 cri.go:89] found id: ""
	I1210 01:10:06.993217  133241 logs.go:282] 0 containers: []
	W1210 01:10:06.993226  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:06.993231  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:06.993285  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:07.026420  133241 cri.go:89] found id: ""
	I1210 01:10:07.026449  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.026462  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:07.026469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:07.026541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:07.060810  133241 cri.go:89] found id: ""
	I1210 01:10:07.060837  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.060847  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:07.060855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:07.060921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:07.091336  133241 cri.go:89] found id: ""
	I1210 01:10:07.091376  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.091386  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:07.091393  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:07.091510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:07.122715  133241 cri.go:89] found id: ""
	I1210 01:10:07.122750  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.122762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:07.122770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:07.122822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:07.154444  133241 cri.go:89] found id: ""
	I1210 01:10:07.154479  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.154490  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:07.154496  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:07.154575  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:07.189571  133241 cri.go:89] found id: ""
	I1210 01:10:07.189601  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.189614  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:07.189622  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:07.189683  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:07.224455  133241 cri.go:89] found id: ""
	I1210 01:10:07.224480  133241 logs.go:282] 0 containers: []
	W1210 01:10:07.224489  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:07.224499  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:07.224512  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:07.240174  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:07.240214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:07.344027  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:07.344062  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:07.344079  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:07.445219  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:07.445263  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:07.483205  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:07.483238  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:08.291335  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.789606  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:09.321098  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:11.321721  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:08.256670  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.256954  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:12.257117  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:10.034238  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:10.047042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:10.047105  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:10.078622  133241 cri.go:89] found id: ""
	I1210 01:10:10.078654  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.078666  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:10.078675  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:10.078737  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:10.109353  133241 cri.go:89] found id: ""
	I1210 01:10:10.109379  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.109390  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:10.109398  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:10.109470  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:10.143036  133241 cri.go:89] found id: ""
	I1210 01:10:10.143065  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.143077  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:10.143084  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:10.143150  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:10.174938  133241 cri.go:89] found id: ""
	I1210 01:10:10.174966  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.174975  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:10.174981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:10.175032  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:10.208680  133241 cri.go:89] found id: ""
	I1210 01:10:10.208709  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.208718  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:10.208724  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:10.208793  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:10.241153  133241 cri.go:89] found id: ""
	I1210 01:10:10.241189  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.241202  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:10.241213  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:10.241290  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:10.279405  133241 cri.go:89] found id: ""
	I1210 01:10:10.279437  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.279448  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:10.279457  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:10.279523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:10.317915  133241 cri.go:89] found id: ""
	I1210 01:10:10.317943  133241 logs.go:282] 0 containers: []
	W1210 01:10:10.317953  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:10.317964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:10.317980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:10.370920  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:10.370955  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:10.385823  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:10.385867  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:10.452746  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:10.452774  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:10.452793  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:10.535218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:10.535291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.075172  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:13.090707  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:13.090785  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:13.141780  133241 cri.go:89] found id: ""
	I1210 01:10:13.141804  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.141812  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:13.141818  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:13.141869  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:13.172241  133241 cri.go:89] found id: ""
	I1210 01:10:13.172263  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.172271  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:13.172277  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:13.172339  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:13.200378  133241 cri.go:89] found id: ""
	I1210 01:10:13.200401  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.200410  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:13.200415  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:13.200472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:13.232921  133241 cri.go:89] found id: ""
	I1210 01:10:13.232952  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.232964  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:13.232972  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:13.233088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:13.265305  133241 cri.go:89] found id: ""
	I1210 01:10:13.265333  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.265344  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:13.265352  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:13.265411  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:13.299192  133241 cri.go:89] found id: ""
	I1210 01:10:13.299216  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.299226  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:13.299233  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:13.299306  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:13.332156  133241 cri.go:89] found id: ""
	I1210 01:10:13.332184  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.332195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:13.332202  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:13.332259  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:13.365450  133241 cri.go:89] found id: ""
	I1210 01:10:13.365484  133241 logs.go:282] 0 containers: []
	W1210 01:10:13.365498  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:13.365511  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:13.365529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:13.440807  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:13.440849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:13.477283  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:13.477325  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:13.527481  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:13.527514  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:13.540146  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:13.540178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:13.602711  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:12.790714  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.290963  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:13.820293  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:15.821845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:14.755454  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.756574  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:16.103789  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:16.116124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:16.116204  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:16.153057  133241 cri.go:89] found id: ""
	I1210 01:10:16.153082  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.153102  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:16.153109  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:16.153162  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:16.186489  133241 cri.go:89] found id: ""
	I1210 01:10:16.186517  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.186528  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:16.186535  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:16.186613  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:16.216369  133241 cri.go:89] found id: ""
	I1210 01:10:16.216404  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.216415  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:16.216423  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:16.216482  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:16.246254  133241 cri.go:89] found id: ""
	I1210 01:10:16.246282  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.246292  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:16.246299  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:16.246361  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:16.277815  133241 cri.go:89] found id: ""
	I1210 01:10:16.277844  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.277855  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:16.277866  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:16.277931  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:16.312101  133241 cri.go:89] found id: ""
	I1210 01:10:16.312132  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.312141  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:16.312147  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:16.312202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:16.350273  133241 cri.go:89] found id: ""
	I1210 01:10:16.350299  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.350307  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:16.350313  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:16.350376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:16.388091  133241 cri.go:89] found id: ""
	I1210 01:10:16.388113  133241 logs.go:282] 0 containers: []
	W1210 01:10:16.388121  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:16.388130  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:16.388150  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:16.456039  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:16.456066  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:16.456085  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:16.534919  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:16.534950  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:16.581598  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:16.581639  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:16.631479  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:16.631515  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.143852  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:19.156229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:19.156300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:19.186482  133241 cri.go:89] found id: ""
	I1210 01:10:19.186506  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.186514  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:19.186521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:19.186585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:19.216945  133241 cri.go:89] found id: ""
	I1210 01:10:19.216967  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.216975  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:19.216983  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:19.217060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:19.247628  133241 cri.go:89] found id: ""
	I1210 01:10:19.247656  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.247666  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:19.247672  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:19.247719  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:19.281256  133241 cri.go:89] found id: ""
	I1210 01:10:19.281287  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.281297  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:19.281303  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:19.281364  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:19.315123  133241 cri.go:89] found id: ""
	I1210 01:10:19.315156  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.315168  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:19.315176  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:19.315246  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:19.349687  133241 cri.go:89] found id: ""
	I1210 01:10:19.349714  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.349725  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:19.349733  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:19.349797  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:19.381019  133241 cri.go:89] found id: ""
	I1210 01:10:19.381046  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.381058  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:19.381065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:19.381129  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:19.413983  133241 cri.go:89] found id: ""
	I1210 01:10:19.414023  133241 logs.go:282] 0 containers: []
	W1210 01:10:19.414035  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:19.414048  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:19.414063  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:19.453812  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:19.453848  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:19.504016  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:19.504049  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:19.517665  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:19.517695  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:19.583777  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:19.583807  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:19.583825  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:17.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.290934  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:17.821893  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:20.320787  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:19.256192  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:21.256740  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.160219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:22.172908  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:22.172984  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:22.203634  133241 cri.go:89] found id: ""
	I1210 01:10:22.203665  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.203680  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:22.203689  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:22.203754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:22.233632  133241 cri.go:89] found id: ""
	I1210 01:10:22.233660  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.233671  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:22.233679  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:22.233748  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:22.269679  133241 cri.go:89] found id: ""
	I1210 01:10:22.269704  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.269713  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:22.269719  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:22.269769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:22.301819  133241 cri.go:89] found id: ""
	I1210 01:10:22.301850  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.301858  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:22.301864  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:22.301914  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:22.337435  133241 cri.go:89] found id: ""
	I1210 01:10:22.337470  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.337479  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:22.337494  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:22.337562  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:22.368920  133241 cri.go:89] found id: ""
	I1210 01:10:22.368944  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.368952  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:22.368957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:22.369020  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:22.401157  133241 cri.go:89] found id: ""
	I1210 01:10:22.401188  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.401200  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:22.401211  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:22.401277  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:22.436278  133241 cri.go:89] found id: ""
	I1210 01:10:22.436317  133241 logs.go:282] 0 containers: []
	W1210 01:10:22.436330  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:22.436343  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:22.436359  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:22.485320  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:22.485354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:22.498225  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:22.498253  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:22.559918  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:22.559944  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:22.559961  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:22.636884  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:22.636919  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:22.291705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.790056  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:26.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:22.322051  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:24.821800  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:23.756797  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.757544  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:25.173302  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:25.185398  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:25.185481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:25.215003  133241 cri.go:89] found id: ""
	I1210 01:10:25.215030  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.215038  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:25.215044  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:25.215106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:25.247583  133241 cri.go:89] found id: ""
	I1210 01:10:25.247604  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.247613  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:25.247620  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:25.247679  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:25.282125  133241 cri.go:89] found id: ""
	I1210 01:10:25.282150  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.282158  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:25.282163  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:25.282220  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:25.317560  133241 cri.go:89] found id: ""
	I1210 01:10:25.317590  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.317599  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:25.317605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:25.317666  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:25.354392  133241 cri.go:89] found id: ""
	I1210 01:10:25.354418  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.354430  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:25.354441  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:25.354510  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:25.392349  133241 cri.go:89] found id: ""
	I1210 01:10:25.392375  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.392384  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:25.392390  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:25.392442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:25.429665  133241 cri.go:89] found id: ""
	I1210 01:10:25.429692  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.429702  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:25.429709  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:25.429766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:25.466437  133241 cri.go:89] found id: ""
	I1210 01:10:25.466463  133241 logs.go:282] 0 containers: []
	W1210 01:10:25.466476  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:25.466488  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:25.466503  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:25.480846  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:25.480885  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:25.548828  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:25.548861  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:25.548877  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:25.626942  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:25.626985  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:25.664081  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:25.664120  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.219032  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:28.233820  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:28.233886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:28.267033  133241 cri.go:89] found id: ""
	I1210 01:10:28.267061  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.267072  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:28.267079  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:28.267133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:28.304241  133241 cri.go:89] found id: ""
	I1210 01:10:28.304268  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.304276  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:28.304282  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:28.304329  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:28.339783  133241 cri.go:89] found id: ""
	I1210 01:10:28.339810  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.339817  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:28.339824  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:28.339897  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:28.371890  133241 cri.go:89] found id: ""
	I1210 01:10:28.371944  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.371957  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:28.371965  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:28.372033  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:28.409995  133241 cri.go:89] found id: ""
	I1210 01:10:28.410031  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.410042  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:28.410050  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:28.410122  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:28.443817  133241 cri.go:89] found id: ""
	I1210 01:10:28.443854  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.443866  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:28.443874  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:28.443943  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:28.476813  133241 cri.go:89] found id: ""
	I1210 01:10:28.476842  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.476850  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:28.476856  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:28.476918  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:28.509092  133241 cri.go:89] found id: ""
	I1210 01:10:28.509119  133241 logs.go:282] 0 containers: []
	W1210 01:10:28.509129  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:28.509147  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:28.509166  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:28.582990  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:28.583021  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:28.624120  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:28.624152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:28.673901  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:28.673942  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:28.686654  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:28.686684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:28.754914  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:28.790925  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.291799  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:27.321458  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:29.820474  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.820865  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:28.257390  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:30.757194  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:31.256019  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:31.269297  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:31.269374  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:31.306032  133241 cri.go:89] found id: ""
	I1210 01:10:31.306063  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.306074  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:31.306082  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:31.306149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:31.339930  133241 cri.go:89] found id: ""
	I1210 01:10:31.339964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.339976  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:31.339984  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:31.340049  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:31.371820  133241 cri.go:89] found id: ""
	I1210 01:10:31.371853  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.371865  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:31.371872  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:31.371929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:31.406853  133241 cri.go:89] found id: ""
	I1210 01:10:31.406880  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.406888  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:31.406895  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:31.406973  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:31.441927  133241 cri.go:89] found id: ""
	I1210 01:10:31.441964  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.441983  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:31.441993  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:31.442059  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:31.475302  133241 cri.go:89] found id: ""
	I1210 01:10:31.475335  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.475347  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:31.475356  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:31.475422  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:31.508445  133241 cri.go:89] found id: ""
	I1210 01:10:31.508479  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.508489  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:31.508495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:31.508549  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:31.542658  133241 cri.go:89] found id: ""
	I1210 01:10:31.542686  133241 logs.go:282] 0 containers: []
	W1210 01:10:31.542694  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:31.542704  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:31.542720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:31.591393  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:31.591432  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:31.604124  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:31.604152  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:31.670342  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:31.670381  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:31.670401  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:31.755216  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:31.755273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.307218  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:34.321878  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:34.321951  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:34.355191  133241 cri.go:89] found id: ""
	I1210 01:10:34.355230  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.355238  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:34.355244  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:34.355300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:34.392397  133241 cri.go:89] found id: ""
	I1210 01:10:34.392432  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.392445  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:34.392453  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:34.392522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:34.424468  133241 cri.go:89] found id: ""
	I1210 01:10:34.424496  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.424513  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:34.424519  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:34.424568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:34.456966  133241 cri.go:89] found id: ""
	I1210 01:10:34.456990  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.457000  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:34.457006  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:34.457057  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:34.491830  133241 cri.go:89] found id: ""
	I1210 01:10:34.491863  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.491874  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:34.491882  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:34.491949  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:34.523409  133241 cri.go:89] found id: ""
	I1210 01:10:34.523441  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.523455  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:34.523464  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:34.523520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:34.555092  133241 cri.go:89] found id: ""
	I1210 01:10:34.555125  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.555136  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:34.555143  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:34.555211  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:34.585491  133241 cri.go:89] found id: ""
	I1210 01:10:34.585521  133241 logs.go:282] 0 containers: []
	W1210 01:10:34.585530  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:34.585540  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:34.585553  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:34.598250  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:34.598281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:10:33.790899  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.791148  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:34.321870  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:36.821430  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:32.757323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:35.256735  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:37.257310  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:10:34.662759  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:34.662784  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:34.662797  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:34.740495  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:34.740537  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:34.777192  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:34.777231  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.329212  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:37.342322  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:37.342397  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:37.374083  133241 cri.go:89] found id: ""
	I1210 01:10:37.374114  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.374124  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:37.374133  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:37.374202  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:37.404838  133241 cri.go:89] found id: ""
	I1210 01:10:37.404872  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.404880  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:37.404886  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:37.404948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:37.439471  133241 cri.go:89] found id: ""
	I1210 01:10:37.439503  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.439515  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:37.439523  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:37.439598  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:37.473725  133241 cri.go:89] found id: ""
	I1210 01:10:37.473756  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.473765  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:37.473770  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:37.473822  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:37.507449  133241 cri.go:89] found id: ""
	I1210 01:10:37.507478  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.507491  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:37.507498  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:37.507565  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:37.538432  133241 cri.go:89] found id: ""
	I1210 01:10:37.538468  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.538479  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:37.538490  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:37.538583  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:37.571690  133241 cri.go:89] found id: ""
	I1210 01:10:37.571716  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.571724  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:37.571730  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:37.571787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:37.606988  133241 cri.go:89] found id: ""
	I1210 01:10:37.607017  133241 logs.go:282] 0 containers: []
	W1210 01:10:37.607026  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:37.607036  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:37.607048  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:37.655260  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:37.655290  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:37.667647  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:37.667672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:37.734898  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:37.734955  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:37.734971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:37.823654  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:37.823690  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:37.792020  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.290220  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.323412  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:41.822486  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:39.759358  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:42.256854  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:40.361513  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:40.374995  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:40.375054  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:40.407043  133241 cri.go:89] found id: ""
	I1210 01:10:40.407077  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.407086  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:40.407091  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:40.407146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:40.438613  133241 cri.go:89] found id: ""
	I1210 01:10:40.438644  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.438655  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:40.438663  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:40.438725  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:40.468747  133241 cri.go:89] found id: ""
	I1210 01:10:40.468781  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.468794  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:40.468801  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:40.468873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:40.501670  133241 cri.go:89] found id: ""
	I1210 01:10:40.501700  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.501708  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:40.501714  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:40.501762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:40.531671  133241 cri.go:89] found id: ""
	I1210 01:10:40.531694  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.531704  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:40.531712  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:40.531769  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:40.562804  133241 cri.go:89] found id: ""
	I1210 01:10:40.562827  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.562836  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:40.562847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:40.562909  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:40.593286  133241 cri.go:89] found id: ""
	I1210 01:10:40.593309  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.593318  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:40.593323  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:40.593369  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:40.624387  133241 cri.go:89] found id: ""
	I1210 01:10:40.624424  133241 logs.go:282] 0 containers: []
	W1210 01:10:40.624438  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:40.624452  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:40.624479  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:40.636616  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:40.636643  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:40.703044  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:40.703071  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:40.703089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:40.782186  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:40.782220  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:40.824410  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:40.824434  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.377460  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:43.391624  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:43.391704  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:43.424454  133241 cri.go:89] found id: ""
	I1210 01:10:43.424489  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.424499  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:43.424505  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:43.424570  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:43.454067  133241 cri.go:89] found id: ""
	I1210 01:10:43.454094  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.454102  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:43.454108  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:43.454160  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:43.485905  133241 cri.go:89] found id: ""
	I1210 01:10:43.485938  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.485949  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:43.485956  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:43.486021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:43.516402  133241 cri.go:89] found id: ""
	I1210 01:10:43.516427  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.516435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:43.516447  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:43.516521  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:43.549049  133241 cri.go:89] found id: ""
	I1210 01:10:43.549102  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.549114  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:43.549124  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:43.549181  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:43.582610  133241 cri.go:89] found id: ""
	I1210 01:10:43.582641  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.582652  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:43.582661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:43.582720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:43.614392  133241 cri.go:89] found id: ""
	I1210 01:10:43.614424  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.614435  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:43.614442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:43.614507  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:43.646797  133241 cri.go:89] found id: ""
	I1210 01:10:43.646830  133241 logs.go:282] 0 containers: []
	W1210 01:10:43.646842  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:43.646855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:43.646872  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:43.682884  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:43.682921  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:43.739117  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:43.739159  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:43.754008  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:43.754047  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:43.825110  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:43.825140  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:43.825156  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:42.290697  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.790711  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.791942  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.321563  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.821954  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:44.756178  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.757399  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:46.401040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:46.414417  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:46.414515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:46.446832  133241 cri.go:89] found id: ""
	I1210 01:10:46.446861  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.446871  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:46.446879  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:46.446945  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:46.480534  133241 cri.go:89] found id: ""
	I1210 01:10:46.480566  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.480577  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:46.480584  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:46.480649  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:46.512706  133241 cri.go:89] found id: ""
	I1210 01:10:46.512735  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.512745  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:46.512752  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:46.512818  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:46.545769  133241 cri.go:89] found id: ""
	I1210 01:10:46.545803  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.545815  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:46.545823  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:46.545889  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:46.575715  133241 cri.go:89] found id: ""
	I1210 01:10:46.575750  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.575762  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:46.575769  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:46.575834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:46.605133  133241 cri.go:89] found id: ""
	I1210 01:10:46.605164  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.605175  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:46.605183  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:46.605235  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:46.635536  133241 cri.go:89] found id: ""
	I1210 01:10:46.635571  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.635582  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:46.635589  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:46.635650  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:46.665579  133241 cri.go:89] found id: ""
	I1210 01:10:46.665608  133241 logs.go:282] 0 containers: []
	W1210 01:10:46.665617  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:46.665627  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:46.665637  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:46.749766  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:46.749806  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:46.788690  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:46.788725  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:46.841860  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:46.841888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:46.870621  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:46.870651  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:46.943532  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.444707  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:49.457003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:49.457071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:49.489757  133241 cri.go:89] found id: ""
	I1210 01:10:49.489791  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.489802  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:49.489809  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:49.489859  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:49.519808  133241 cri.go:89] found id: ""
	I1210 01:10:49.519832  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.519839  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:49.519844  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:49.519895  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:49.552725  133241 cri.go:89] found id: ""
	I1210 01:10:49.552748  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.552756  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:49.552762  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:49.552816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:49.583657  133241 cri.go:89] found id: ""
	I1210 01:10:49.583686  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.583699  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:49.583710  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:49.583771  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:49.614520  133241 cri.go:89] found id: ""
	I1210 01:10:49.614547  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.614569  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:49.614579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:49.614644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:49.290385  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.291504  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.321277  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.321612  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.256723  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:51.257348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:49.646739  133241 cri.go:89] found id: ""
	I1210 01:10:49.646788  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.646800  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:49.646811  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:49.646871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:49.680156  133241 cri.go:89] found id: ""
	I1210 01:10:49.680184  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.680195  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:49.680203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:49.680271  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:49.711052  133241 cri.go:89] found id: ""
	I1210 01:10:49.711090  133241 logs.go:282] 0 containers: []
	W1210 01:10:49.711103  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:49.711115  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:49.711133  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:49.765139  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:49.765173  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:49.777581  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:49.777612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:49.842857  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:49.842882  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:49.842897  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:49.923492  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:49.923529  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.465282  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:52.478468  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:52.478535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:52.514379  133241 cri.go:89] found id: ""
	I1210 01:10:52.514411  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.514420  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:52.514426  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:52.514481  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:52.545952  133241 cri.go:89] found id: ""
	I1210 01:10:52.545981  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.545991  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:52.545999  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:52.546063  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:52.581959  133241 cri.go:89] found id: ""
	I1210 01:10:52.581986  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.581995  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:52.582003  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:52.582109  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:52.634648  133241 cri.go:89] found id: ""
	I1210 01:10:52.634674  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.634686  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:52.634693  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:52.634753  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:52.668485  133241 cri.go:89] found id: ""
	I1210 01:10:52.668509  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.668518  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:52.668524  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:52.668587  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:52.702030  133241 cri.go:89] found id: ""
	I1210 01:10:52.702058  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.702067  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:52.702074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:52.702139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:52.736618  133241 cri.go:89] found id: ""
	I1210 01:10:52.736647  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.736655  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:52.736661  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:52.736728  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:52.769400  133241 cri.go:89] found id: ""
	I1210 01:10:52.769427  133241 logs.go:282] 0 containers: []
	W1210 01:10:52.769436  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:52.769444  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:52.769462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:52.808900  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:52.808936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:52.861032  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:52.861067  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:52.874251  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:52.874281  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:52.946117  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:52.946145  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:52.946174  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:53.790452  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.791486  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.820716  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.822118  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:53.756664  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:56.255828  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:55.526812  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:55.541146  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:55.541232  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:55.582382  133241 cri.go:89] found id: ""
	I1210 01:10:55.582414  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.582424  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:55.582430  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:55.582483  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:55.620756  133241 cri.go:89] found id: ""
	I1210 01:10:55.620781  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.620790  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:55.620795  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:55.620865  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:55.657136  133241 cri.go:89] found id: ""
	I1210 01:10:55.657173  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.657184  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:55.657192  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:55.657253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:55.691809  133241 cri.go:89] found id: ""
	I1210 01:10:55.691836  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.691844  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:55.691850  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:55.691901  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:55.725747  133241 cri.go:89] found id: ""
	I1210 01:10:55.725782  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.725794  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:55.725802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:55.725870  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:55.758656  133241 cri.go:89] found id: ""
	I1210 01:10:55.758686  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.758697  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:55.758704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:55.758766  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:55.791407  133241 cri.go:89] found id: ""
	I1210 01:10:55.791437  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.791447  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:55.791453  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:55.791522  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:55.823238  133241 cri.go:89] found id: ""
	I1210 01:10:55.823259  133241 logs.go:282] 0 containers: []
	W1210 01:10:55.823269  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:55.823277  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:55.823288  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:55.858051  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:55.858090  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:55.910896  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:55.910928  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:55.923792  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:55.923814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:55.994264  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:55.994283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:55.994297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:58.570410  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:10:58.582632  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:10:58.582709  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:10:58.614706  133241 cri.go:89] found id: ""
	I1210 01:10:58.614741  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.614752  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:10:58.614759  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:10:58.614820  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:10:58.645853  133241 cri.go:89] found id: ""
	I1210 01:10:58.645880  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.645888  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:10:58.645893  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:10:58.645946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:10:58.681278  133241 cri.go:89] found id: ""
	I1210 01:10:58.681305  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.681313  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:10:58.681319  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:10:58.681376  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:10:58.715312  133241 cri.go:89] found id: ""
	I1210 01:10:58.715344  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.715356  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:10:58.715364  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:10:58.715434  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:10:58.753150  133241 cri.go:89] found id: ""
	I1210 01:10:58.753182  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.753193  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:10:58.753201  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:10:58.753275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:10:58.792337  133241 cri.go:89] found id: ""
	I1210 01:10:58.792363  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.792371  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:10:58.792377  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:10:58.792424  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:10:58.824538  133241 cri.go:89] found id: ""
	I1210 01:10:58.824562  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.824569  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:10:58.824575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:10:58.824626  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:10:58.859699  133241 cri.go:89] found id: ""
	I1210 01:10:58.859733  133241 logs.go:282] 0 containers: []
	W1210 01:10:58.859745  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:10:58.859755  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:10:58.859768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:10:58.874557  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:10:58.874607  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:10:58.942377  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:10:58.942399  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:10:58.942413  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:10:59.020700  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:10:59.020743  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:10:59.092780  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:10:59.092820  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:10:58.290069  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.290277  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.321783  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.820779  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:10:58.256816  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:00.756307  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:01.656942  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:01.670706  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:01.670790  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:01.704182  133241 cri.go:89] found id: ""
	I1210 01:11:01.704222  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.704235  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:01.704242  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:01.704295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:01.737176  133241 cri.go:89] found id: ""
	I1210 01:11:01.737207  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.737216  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:01.737222  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:01.737279  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:01.771891  133241 cri.go:89] found id: ""
	I1210 01:11:01.771924  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.771935  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:01.771943  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:01.772001  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:01.804964  133241 cri.go:89] found id: ""
	I1210 01:11:01.804994  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.805005  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:01.805026  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:01.805101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:01.837156  133241 cri.go:89] found id: ""
	I1210 01:11:01.837184  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.837195  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:01.837203  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:01.837260  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:01.866759  133241 cri.go:89] found id: ""
	I1210 01:11:01.866783  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.866793  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:01.866802  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:01.866868  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:01.897349  133241 cri.go:89] found id: ""
	I1210 01:11:01.897377  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.897387  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:01.897394  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:01.897452  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:01.928390  133241 cri.go:89] found id: ""
	I1210 01:11:01.928419  133241 logs.go:282] 0 containers: []
	W1210 01:11:01.928430  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:01.928442  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:01.928462  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:01.995531  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:01.995558  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:01.995572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:02.073144  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:02.073178  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:02.107235  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:02.107266  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:02.159959  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:02.159993  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:02.789938  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.790544  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.821058  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.822126  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:02.756968  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:05.255943  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.256779  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:04.672775  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:04.686495  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:04.686604  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:04.720867  133241 cri.go:89] found id: ""
	I1210 01:11:04.720977  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.721005  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:04.721034  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:04.721143  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:04.757796  133241 cri.go:89] found id: ""
	I1210 01:11:04.757823  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.757831  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:04.757837  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:04.757896  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:04.799823  133241 cri.go:89] found id: ""
	I1210 01:11:04.799848  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.799856  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:04.799861  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:04.799921  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:04.848259  133241 cri.go:89] found id: ""
	I1210 01:11:04.848291  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.848303  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:04.848312  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:04.848392  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:04.898530  133241 cri.go:89] found id: ""
	I1210 01:11:04.898583  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.898596  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:04.898605  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:04.898673  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:04.935954  133241 cri.go:89] found id: ""
	I1210 01:11:04.935979  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.935987  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:04.935992  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:04.936037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:04.970503  133241 cri.go:89] found id: ""
	I1210 01:11:04.970531  133241 logs.go:282] 0 containers: []
	W1210 01:11:04.970538  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:04.970544  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:04.970627  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:05.003257  133241 cri.go:89] found id: ""
	I1210 01:11:05.003280  133241 logs.go:282] 0 containers: []
	W1210 01:11:05.003289  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:05.003298  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:05.003311  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:05.053816  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:05.053849  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:05.066024  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:05.066056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:05.129515  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:05.129542  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:05.129559  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:05.203823  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:05.203861  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:07.743773  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:07.756948  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:07.757021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:07.790298  133241 cri.go:89] found id: ""
	I1210 01:11:07.790326  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.790334  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:07.790341  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:07.790432  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:07.822653  133241 cri.go:89] found id: ""
	I1210 01:11:07.822682  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.822693  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:07.822700  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:07.822754  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:07.856125  133241 cri.go:89] found id: ""
	I1210 01:11:07.856160  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.856171  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:07.856178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:07.856247  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:07.888297  133241 cri.go:89] found id: ""
	I1210 01:11:07.888321  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.888329  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:07.888336  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:07.888394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:07.919131  133241 cri.go:89] found id: ""
	I1210 01:11:07.919159  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.919170  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:07.919177  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:07.919245  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:07.954289  133241 cri.go:89] found id: ""
	I1210 01:11:07.954320  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.954332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:07.954340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:07.954396  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:07.985447  133241 cri.go:89] found id: ""
	I1210 01:11:07.985482  133241 logs.go:282] 0 containers: []
	W1210 01:11:07.985497  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:07.985505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:07.985560  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:08.016461  133241 cri.go:89] found id: ""
	I1210 01:11:08.016491  133241 logs.go:282] 0 containers: []
	W1210 01:11:08.016504  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:08.016516  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:08.016534  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:08.051346  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:08.051386  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:08.101708  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:08.101741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:08.113883  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:08.113912  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:08.174656  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:08.174681  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:08.174696  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:07.289462  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.290707  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.790555  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:07.322137  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.821004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:11.821064  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:09.757877  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:12.256156  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:10.751754  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:10.768007  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:10.768071  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:10.814141  133241 cri.go:89] found id: ""
	I1210 01:11:10.814167  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.814177  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:10.814187  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:10.814255  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:10.864355  133241 cri.go:89] found id: ""
	I1210 01:11:10.864379  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.864387  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:10.864392  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:10.864464  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:10.917533  133241 cri.go:89] found id: ""
	I1210 01:11:10.917563  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.917572  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:10.917579  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:10.917644  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:10.949555  133241 cri.go:89] found id: ""
	I1210 01:11:10.949589  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.949601  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:10.949609  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:10.949668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:10.982997  133241 cri.go:89] found id: ""
	I1210 01:11:10.983022  133241 logs.go:282] 0 containers: []
	W1210 01:11:10.983030  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:10.983036  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:10.983101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:11.016318  133241 cri.go:89] found id: ""
	I1210 01:11:11.016348  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.016359  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:11.016366  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:11.016460  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:11.045980  133241 cri.go:89] found id: ""
	I1210 01:11:11.046004  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.046012  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:11.046018  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:11.046067  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:11.074303  133241 cri.go:89] found id: ""
	I1210 01:11:11.074329  133241 logs.go:282] 0 containers: []
	W1210 01:11:11.074336  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:11.074346  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:11.074357  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:11.108874  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:11.108907  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:11.156642  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:11.156672  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:11.168505  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:11.168527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:11.239949  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:11.239976  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:11.239994  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:13.828538  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:13.841876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:13.841929  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:13.872854  133241 cri.go:89] found id: ""
	I1210 01:11:13.872884  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.872896  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:13.872904  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:13.872955  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:13.903759  133241 cri.go:89] found id: ""
	I1210 01:11:13.903790  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.903803  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:13.903812  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:13.903877  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:13.938898  133241 cri.go:89] found id: ""
	I1210 01:11:13.938921  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.938929  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:13.938934  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:13.938992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:13.979322  133241 cri.go:89] found id: ""
	I1210 01:11:13.979343  133241 logs.go:282] 0 containers: []
	W1210 01:11:13.979351  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:13.979358  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:13.979419  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:14.012959  133241 cri.go:89] found id: ""
	I1210 01:11:14.012984  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.012993  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:14.012999  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:14.013048  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:14.050248  133241 cri.go:89] found id: ""
	I1210 01:11:14.050274  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.050282  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:14.050288  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:14.050337  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:14.086029  133241 cri.go:89] found id: ""
	I1210 01:11:14.086061  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.086072  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:14.086080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:14.086149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:14.119966  133241 cri.go:89] found id: ""
	I1210 01:11:14.119994  133241 logs.go:282] 0 containers: []
	W1210 01:11:14.120002  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:14.120012  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:14.120025  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:14.133378  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:14.133406  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:14.199060  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:14.199093  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:14.199108  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:14.282056  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:14.282089  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:14.321155  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:14.321182  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:13.790898  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.290292  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:13.821872  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.320917  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:14.257094  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.755448  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:16.871040  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:16.882350  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:16.882417  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:16.911877  133241 cri.go:89] found id: ""
	I1210 01:11:16.911910  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.911922  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:16.911930  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:16.911993  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:16.946898  133241 cri.go:89] found id: ""
	I1210 01:11:16.946931  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.946945  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:16.946952  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:16.947021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:16.979154  133241 cri.go:89] found id: ""
	I1210 01:11:16.979185  133241 logs.go:282] 0 containers: []
	W1210 01:11:16.979196  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:16.979209  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:16.979293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:17.008977  133241 cri.go:89] found id: ""
	I1210 01:11:17.009010  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.009021  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:17.009028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:17.009093  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:17.041399  133241 cri.go:89] found id: ""
	I1210 01:11:17.041431  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.041440  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:17.041446  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:17.041505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:17.074254  133241 cri.go:89] found id: ""
	I1210 01:11:17.074284  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.074295  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:17.074305  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:17.074385  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:17.104982  133241 cri.go:89] found id: ""
	I1210 01:11:17.105015  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.105025  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:17.105033  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:17.105094  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:17.135240  133241 cri.go:89] found id: ""
	I1210 01:11:17.135265  133241 logs.go:282] 0 containers: []
	W1210 01:11:17.135275  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:17.135286  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:17.135298  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:17.186952  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:17.187004  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:17.201444  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:17.201472  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:17.272210  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:17.272229  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:17.272245  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:17.355218  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:17.355256  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:18.290407  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.292289  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.321390  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:20.321550  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:18.756823  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:21.256882  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:19.892863  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:19.905069  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:19.905138  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:19.943515  133241 cri.go:89] found id: ""
	I1210 01:11:19.943544  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.943557  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:19.943566  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:19.943629  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:19.974474  133241 cri.go:89] found id: ""
	I1210 01:11:19.974499  133241 logs.go:282] 0 containers: []
	W1210 01:11:19.974509  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:19.974517  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:19.974597  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:20.008980  133241 cri.go:89] found id: ""
	I1210 01:11:20.009011  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.009023  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:20.009030  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:20.009097  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:20.040655  133241 cri.go:89] found id: ""
	I1210 01:11:20.040681  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.040690  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:20.040696  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:20.040745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:20.073761  133241 cri.go:89] found id: ""
	I1210 01:11:20.073788  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.073799  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:20.073806  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:20.073873  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:20.104381  133241 cri.go:89] found id: ""
	I1210 01:11:20.104410  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.104421  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:20.104429  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:20.104489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:20.138130  133241 cri.go:89] found id: ""
	I1210 01:11:20.138158  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.138167  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:20.138173  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:20.138229  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:20.166883  133241 cri.go:89] found id: ""
	I1210 01:11:20.166908  133241 logs.go:282] 0 containers: []
	W1210 01:11:20.166916  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:20.166926  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:20.166940  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:20.199437  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:20.199470  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:20.247384  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:20.247418  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:20.260363  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:20.260392  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:20.330260  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:20.330283  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:20.330299  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:22.912818  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:22.925241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:22.925316  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:22.957975  133241 cri.go:89] found id: ""
	I1210 01:11:22.958003  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.958015  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:22.958023  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:22.958087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:22.991067  133241 cri.go:89] found id: ""
	I1210 01:11:22.991098  133241 logs.go:282] 0 containers: []
	W1210 01:11:22.991109  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:22.991117  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:22.991177  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:23.022191  133241 cri.go:89] found id: ""
	I1210 01:11:23.022280  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.022297  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:23.022307  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:23.022373  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:23.055399  133241 cri.go:89] found id: ""
	I1210 01:11:23.055427  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.055435  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:23.055440  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:23.055504  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:23.085084  133241 cri.go:89] found id: ""
	I1210 01:11:23.085114  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.085126  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:23.085133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:23.085195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:23.114896  133241 cri.go:89] found id: ""
	I1210 01:11:23.114921  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.114929  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:23.114935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:23.114995  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:23.146419  133241 cri.go:89] found id: ""
	I1210 01:11:23.146450  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.146463  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:23.146470  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:23.146546  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:23.178747  133241 cri.go:89] found id: ""
	I1210 01:11:23.178774  133241 logs.go:282] 0 containers: []
	W1210 01:11:23.178782  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:23.178792  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:23.178804  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:23.230574  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:23.230609  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:23.242622  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:23.242649  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:23.315830  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:23.315850  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:23.315862  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:23.394054  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:23.394091  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:22.790004  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:24.790395  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.790583  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:22.821008  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.321294  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:23.758460  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:26.257243  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:25.930799  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:25.943287  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:25.943351  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:25.975836  133241 cri.go:89] found id: ""
	I1210 01:11:25.975866  133241 logs.go:282] 0 containers: []
	W1210 01:11:25.975877  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:25.975884  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:25.975948  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:26.008518  133241 cri.go:89] found id: ""
	I1210 01:11:26.008545  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.008553  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:26.008560  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:26.008607  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:26.041953  133241 cri.go:89] found id: ""
	I1210 01:11:26.041992  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.042002  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:26.042009  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:26.042076  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:26.071782  133241 cri.go:89] found id: ""
	I1210 01:11:26.071809  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.071821  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:26.071829  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:26.071894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:26.101051  133241 cri.go:89] found id: ""
	I1210 01:11:26.101075  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.101084  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:26.101089  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:26.101135  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:26.135274  133241 cri.go:89] found id: ""
	I1210 01:11:26.135300  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.135308  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:26.135315  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:26.135368  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:26.168190  133241 cri.go:89] found id: ""
	I1210 01:11:26.168216  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.168224  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:26.168230  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:26.168293  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:26.198453  133241 cri.go:89] found id: ""
	I1210 01:11:26.198482  133241 logs.go:282] 0 containers: []
	W1210 01:11:26.198492  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:26.198505  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:26.198524  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:26.211436  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:26.211460  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:26.273940  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:26.273964  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:26.273980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:26.353198  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:26.353232  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:26.389823  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:26.389857  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:28.940375  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:28.952619  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:28.952676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:28.984886  133241 cri.go:89] found id: ""
	I1210 01:11:28.984914  133241 logs.go:282] 0 containers: []
	W1210 01:11:28.984923  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:28.984929  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:28.984978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:29.015424  133241 cri.go:89] found id: ""
	I1210 01:11:29.015453  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.015463  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:29.015469  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:29.015520  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:29.045941  133241 cri.go:89] found id: ""
	I1210 01:11:29.045977  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.045989  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:29.045997  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:29.046065  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:29.077346  133241 cri.go:89] found id: ""
	I1210 01:11:29.077375  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.077384  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:29.077389  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:29.077442  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:29.109825  133241 cri.go:89] found id: ""
	I1210 01:11:29.109861  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.109873  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:29.109880  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:29.109946  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:29.141601  133241 cri.go:89] found id: ""
	I1210 01:11:29.141633  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.141645  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:29.141656  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:29.141720  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:29.172711  133241 cri.go:89] found id: ""
	I1210 01:11:29.172747  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.172758  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:29.172766  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:29.172830  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:29.205247  133241 cri.go:89] found id: ""
	I1210 01:11:29.205272  133241 logs.go:282] 0 containers: []
	W1210 01:11:29.205283  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:29.205296  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:29.205310  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:29.255917  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:29.255954  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:29.269246  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:29.269276  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:29.339509  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:29.339535  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:29.339550  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:29.414320  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:29.414358  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:29.291191  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.790102  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:27.820810  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.321256  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:28.756034  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:30.757633  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:31.950667  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:31.963020  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:31.963083  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:31.994537  133241 cri.go:89] found id: ""
	I1210 01:11:31.994586  133241 logs.go:282] 0 containers: []
	W1210 01:11:31.994598  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:31.994606  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:31.994672  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:32.028601  133241 cri.go:89] found id: ""
	I1210 01:11:32.028632  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.028643  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:32.028651  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:32.028710  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:32.060238  133241 cri.go:89] found id: ""
	I1210 01:11:32.060265  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.060273  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:32.060280  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:32.060344  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:32.094421  133241 cri.go:89] found id: ""
	I1210 01:11:32.094446  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.094454  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:32.094460  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:32.094509  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:32.128237  133241 cri.go:89] found id: ""
	I1210 01:11:32.128266  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.128277  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:32.128285  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:32.128355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:32.163139  133241 cri.go:89] found id: ""
	I1210 01:11:32.163163  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.163172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:32.163179  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:32.163237  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:32.194077  133241 cri.go:89] found id: ""
	I1210 01:11:32.194108  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.194119  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:32.194126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:32.194187  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:32.224914  133241 cri.go:89] found id: ""
	I1210 01:11:32.224941  133241 logs.go:282] 0 containers: []
	W1210 01:11:32.224952  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:32.224964  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:32.224980  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:32.275194  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:32.275230  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:32.287642  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:32.287670  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:32.350922  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:32.350953  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:32.350971  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:32.431573  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:32.431610  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:33.790816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.791330  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:32.321475  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.823056  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:33.256524  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:35.755851  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:34.969741  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:34.982487  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:34.982541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:35.015370  133241 cri.go:89] found id: ""
	I1210 01:11:35.015408  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.015419  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:35.015428  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:35.015494  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:35.047381  133241 cri.go:89] found id: ""
	I1210 01:11:35.047418  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.047430  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:35.047437  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:35.047501  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:35.077282  133241 cri.go:89] found id: ""
	I1210 01:11:35.077305  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.077314  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:35.077320  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:35.077380  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:35.107625  133241 cri.go:89] found id: ""
	I1210 01:11:35.107653  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.107664  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:35.107671  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:35.107723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:35.137919  133241 cri.go:89] found id: ""
	I1210 01:11:35.137949  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.137962  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:35.137970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:35.138037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:35.170914  133241 cri.go:89] found id: ""
	I1210 01:11:35.170939  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.170947  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:35.170962  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:35.171021  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:35.201719  133241 cri.go:89] found id: ""
	I1210 01:11:35.201747  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.201755  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:35.201761  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:35.201821  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:35.230544  133241 cri.go:89] found id: ""
	I1210 01:11:35.230582  133241 logs.go:282] 0 containers: []
	W1210 01:11:35.230595  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:35.230607  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:35.230622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:35.243184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:35.243210  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:35.311888  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:35.311915  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:35.311931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:35.387377  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:35.387411  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:35.424087  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:35.424121  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:37.977530  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:37.989741  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:37.989811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:38.023765  133241 cri.go:89] found id: ""
	I1210 01:11:38.023789  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.023799  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:38.023808  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:38.023871  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:38.060456  133241 cri.go:89] found id: ""
	I1210 01:11:38.060487  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.060498  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:38.060505  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:38.060558  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:38.092589  133241 cri.go:89] found id: ""
	I1210 01:11:38.092612  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.092620  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:38.092626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:38.092676  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:38.126075  133241 cri.go:89] found id: ""
	I1210 01:11:38.126115  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.126127  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:38.126137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:38.126216  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:38.158861  133241 cri.go:89] found id: ""
	I1210 01:11:38.158892  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.158905  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:38.158911  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:38.158966  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:38.189136  133241 cri.go:89] found id: ""
	I1210 01:11:38.189164  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.189172  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:38.189178  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:38.189227  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:38.220497  133241 cri.go:89] found id: ""
	I1210 01:11:38.220522  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.220530  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:38.220536  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:38.220585  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:38.253480  133241 cri.go:89] found id: ""
	I1210 01:11:38.253515  133241 logs.go:282] 0 containers: []
	W1210 01:11:38.253527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:38.253539  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:38.253554  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:38.334967  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:38.335006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:38.375521  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:38.375551  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:38.429375  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:38.429419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:38.442488  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:38.442527  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:38.504243  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:38.290594  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.290705  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.322067  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:39.822004  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:37.756517  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:40.256112  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.256624  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:41.005015  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:41.018073  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:41.018149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:41.049377  133241 cri.go:89] found id: ""
	I1210 01:11:41.049409  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.049421  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:41.049429  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:41.049495  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:41.080430  133241 cri.go:89] found id: ""
	I1210 01:11:41.080466  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.080476  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:41.080482  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:41.080543  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:41.113179  133241 cri.go:89] found id: ""
	I1210 01:11:41.113210  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.113222  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:41.113229  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:41.113298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:41.144493  133241 cri.go:89] found id: ""
	I1210 01:11:41.144523  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.144535  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:41.144545  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:41.144612  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:41.174786  133241 cri.go:89] found id: ""
	I1210 01:11:41.174818  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.174828  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:41.174835  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:41.174903  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:41.205010  133241 cri.go:89] found id: ""
	I1210 01:11:41.205050  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.205063  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:41.205072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:41.205142  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:41.236095  133241 cri.go:89] found id: ""
	I1210 01:11:41.236120  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.236131  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:41.236138  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:41.236200  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:41.267610  133241 cri.go:89] found id: ""
	I1210 01:11:41.267639  133241 logs.go:282] 0 containers: []
	W1210 01:11:41.267654  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:41.267665  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:41.267681  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:41.302639  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:41.302669  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:41.352311  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:41.352343  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:41.365111  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:41.365140  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:41.434174  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:41.434197  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:41.434214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.018219  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:44.030886  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:44.030961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:44.072932  133241 cri.go:89] found id: ""
	I1210 01:11:44.072954  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.072962  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:44.072968  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:44.073015  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:44.110425  133241 cri.go:89] found id: ""
	I1210 01:11:44.110456  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.110466  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:44.110473  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:44.110539  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:44.148811  133241 cri.go:89] found id: ""
	I1210 01:11:44.148837  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.148848  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:44.148855  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:44.148922  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:44.184181  133241 cri.go:89] found id: ""
	I1210 01:11:44.184205  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.184213  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:44.184219  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:44.184268  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:44.213545  133241 cri.go:89] found id: ""
	I1210 01:11:44.213578  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.213590  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:44.213597  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:44.213658  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:44.246979  133241 cri.go:89] found id: ""
	I1210 01:11:44.247012  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.247024  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:44.247032  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:44.247095  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:44.280902  133241 cri.go:89] found id: ""
	I1210 01:11:44.280939  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.280950  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:44.280958  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:44.281035  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:44.310824  133241 cri.go:89] found id: ""
	I1210 01:11:44.310848  133241 logs.go:282] 0 containers: []
	W1210 01:11:44.310859  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:44.310870  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:44.310887  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:44.389324  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:44.389354  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:44.425351  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:44.425388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:44.478151  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:44.478197  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:44.491139  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:44.491171  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:44.552150  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:42.790792  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:45.289730  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:42.321108  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.321367  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.820868  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:44.258348  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:46.756838  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:47.052917  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:47.065698  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:47.065764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:47.098483  133241 cri.go:89] found id: ""
	I1210 01:11:47.098518  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.098530  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:47.098538  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:47.098617  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:47.129042  133241 cri.go:89] found id: ""
	I1210 01:11:47.129073  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.129082  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:47.129088  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:47.129157  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:47.160050  133241 cri.go:89] found id: ""
	I1210 01:11:47.160083  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.160094  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:47.160101  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:47.160167  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:47.190078  133241 cri.go:89] found id: ""
	I1210 01:11:47.190111  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.190120  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:47.190126  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:47.190180  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:47.218975  133241 cri.go:89] found id: ""
	I1210 01:11:47.219007  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.219020  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:47.219028  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:47.219088  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:47.248644  133241 cri.go:89] found id: ""
	I1210 01:11:47.248679  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.248689  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:47.248694  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:47.248743  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:47.284306  133241 cri.go:89] found id: ""
	I1210 01:11:47.284332  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.284339  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:47.284345  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:47.284394  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:47.314682  133241 cri.go:89] found id: ""
	I1210 01:11:47.314704  133241 logs.go:282] 0 containers: []
	W1210 01:11:47.314712  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:47.314721  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:47.314733  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:47.365334  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:47.365364  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:47.378184  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:47.378215  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:47.445591  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:47.445619  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:47.445642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:47.523176  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:47.523214  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:47.291212  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.790326  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.790425  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:48.821947  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.321998  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:49.255902  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:51.256638  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:50.059060  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:50.071413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:50.071489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:50.104600  133241 cri.go:89] found id: ""
	I1210 01:11:50.104632  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.104644  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:50.104652  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:50.104715  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:50.136915  133241 cri.go:89] found id: ""
	I1210 01:11:50.136947  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.136957  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:50.136968  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:50.137038  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:50.172552  133241 cri.go:89] found id: ""
	I1210 01:11:50.172582  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.172593  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:50.172604  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:50.172668  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:50.202583  133241 cri.go:89] found id: ""
	I1210 01:11:50.202613  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.202626  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:50.202634  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:50.202696  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:50.232446  133241 cri.go:89] found id: ""
	I1210 01:11:50.232473  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.232483  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:50.232491  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:50.232555  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:50.271296  133241 cri.go:89] found id: ""
	I1210 01:11:50.271321  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.271332  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:50.271340  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:50.271404  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:50.304185  133241 cri.go:89] found id: ""
	I1210 01:11:50.304216  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.304227  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:50.304235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:50.304298  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:50.338004  133241 cri.go:89] found id: ""
	I1210 01:11:50.338030  133241 logs.go:282] 0 containers: []
	W1210 01:11:50.338041  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:50.338051  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:50.338066  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:50.374374  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:50.374403  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:50.427315  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:50.427346  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:50.439862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:50.439890  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:50.505410  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:50.505441  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:50.505458  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.081065  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:53.093760  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:53.093816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:53.126125  133241 cri.go:89] found id: ""
	I1210 01:11:53.126160  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.126172  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:53.126180  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:53.126252  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:53.157694  133241 cri.go:89] found id: ""
	I1210 01:11:53.157719  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.157727  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:53.157732  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:53.157787  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:53.188784  133241 cri.go:89] found id: ""
	I1210 01:11:53.188812  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.188820  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:53.188826  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:53.188882  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:53.220025  133241 cri.go:89] found id: ""
	I1210 01:11:53.220056  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.220066  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:53.220074  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:53.220133  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:53.254601  133241 cri.go:89] found id: ""
	I1210 01:11:53.254632  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.254641  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:53.254649  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:53.254718  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:53.286858  133241 cri.go:89] found id: ""
	I1210 01:11:53.286896  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.286906  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:53.286917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:53.286979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:53.322063  133241 cri.go:89] found id: ""
	I1210 01:11:53.322087  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.322096  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:53.322104  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:53.322175  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:53.353598  133241 cri.go:89] found id: ""
	I1210 01:11:53.353624  133241 logs.go:282] 0 containers: []
	W1210 01:11:53.353632  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:53.353641  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:53.353653  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:53.400634  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:53.400660  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:53.412838  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:53.412870  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:53.475152  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:53.475176  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:53.475191  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:53.551193  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:53.551236  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:54.290077  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.290911  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.322201  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.821982  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:53.257982  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:55.756075  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:56.089703  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:56.102065  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:56.102158  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:56.137385  133241 cri.go:89] found id: ""
	I1210 01:11:56.137410  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.137418  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:56.137424  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:56.137489  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:56.173717  133241 cri.go:89] found id: ""
	I1210 01:11:56.173748  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.173756  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:56.173762  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:56.173823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:56.209007  133241 cri.go:89] found id: ""
	I1210 01:11:56.209031  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.209038  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:56.209044  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:56.209106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:56.247599  133241 cri.go:89] found id: ""
	I1210 01:11:56.247628  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.247636  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:56.247642  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:56.247701  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:56.279510  133241 cri.go:89] found id: ""
	I1210 01:11:56.279535  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.279544  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:56.279550  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:56.279600  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:56.311644  133241 cri.go:89] found id: ""
	I1210 01:11:56.311665  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.311672  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:56.311678  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:56.311722  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:56.343277  133241 cri.go:89] found id: ""
	I1210 01:11:56.343306  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.343317  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:56.343324  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:56.343384  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:56.396352  133241 cri.go:89] found id: ""
	I1210 01:11:56.396380  133241 logs.go:282] 0 containers: []
	W1210 01:11:56.396388  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:56.396397  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:56.396409  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:56.408726  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:56.408754  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:56.483943  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:56.483970  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:56.483987  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:56.566841  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:56.566874  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:11:56.604048  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:56.604083  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.154979  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:11:59.167727  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:11:59.167803  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:11:59.198861  133241 cri.go:89] found id: ""
	I1210 01:11:59.198886  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.198894  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:11:59.198901  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:11:59.198953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:11:59.232900  133241 cri.go:89] found id: ""
	I1210 01:11:59.232935  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.232947  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:11:59.232955  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:11:59.233024  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:11:59.267532  133241 cri.go:89] found id: ""
	I1210 01:11:59.267558  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.267566  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:11:59.267571  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:11:59.267633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:11:59.298091  133241 cri.go:89] found id: ""
	I1210 01:11:59.298120  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.298130  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:11:59.298140  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:11:59.298199  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:11:59.327848  133241 cri.go:89] found id: ""
	I1210 01:11:59.327879  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.327889  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:11:59.327897  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:11:59.327957  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:11:59.356570  133241 cri.go:89] found id: ""
	I1210 01:11:59.356601  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.356617  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:11:59.356626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:11:59.356686  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:11:59.387756  133241 cri.go:89] found id: ""
	I1210 01:11:59.387780  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.387788  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:11:59.387793  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:11:59.387843  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:11:59.419836  133241 cri.go:89] found id: ""
	I1210 01:11:59.419869  133241 logs.go:282] 0 containers: []
	W1210 01:11:59.419878  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:11:59.419887  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:11:59.419902  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:11:59.469663  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:11:59.469697  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:11:59.482738  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:11:59.482768  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:11:59.548687  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:11:59.548717  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:11:59.548739  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:11:58.790282  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:01.290379  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:58.320794  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.821991  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:57.756197  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:00.256511  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:11:59.625772  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:11:59.625809  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.163527  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:02.175510  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:02.175569  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:02.209432  133241 cri.go:89] found id: ""
	I1210 01:12:02.209462  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.209474  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:02.209481  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:02.209535  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:02.241027  133241 cri.go:89] found id: ""
	I1210 01:12:02.241050  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.241059  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:02.241064  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:02.241113  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:02.272251  133241 cri.go:89] found id: ""
	I1210 01:12:02.272277  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.272286  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:02.272293  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:02.272355  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:02.305879  133241 cri.go:89] found id: ""
	I1210 01:12:02.305903  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.305913  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:02.305920  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:02.305978  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:02.339219  133241 cri.go:89] found id: ""
	I1210 01:12:02.339248  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.339263  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:02.339271  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:02.339333  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:02.375203  133241 cri.go:89] found id: ""
	I1210 01:12:02.375240  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.375252  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:02.375260  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:02.375326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:02.406364  133241 cri.go:89] found id: ""
	I1210 01:12:02.406396  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.406406  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:02.406413  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:02.406472  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:02.441572  133241 cri.go:89] found id: ""
	I1210 01:12:02.441602  133241 logs.go:282] 0 containers: []
	W1210 01:12:02.441614  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:02.441627  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:02.441642  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:02.454215  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:02.454241  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:02.526345  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:02.526368  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:02.526388  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:02.603813  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:02.603845  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:02.640102  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:02.640136  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:03.291135  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.792322  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:03.321084  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.322066  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:02.755961  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.256774  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:05.189319  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:05.201957  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:05.202022  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:05.242211  133241 cri.go:89] found id: ""
	I1210 01:12:05.242238  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.242247  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:05.242253  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:05.242300  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:05.277287  133241 cri.go:89] found id: ""
	I1210 01:12:05.277309  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.277317  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:05.277323  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:05.277382  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:05.309455  133241 cri.go:89] found id: ""
	I1210 01:12:05.309480  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.309488  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:05.309493  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:05.309540  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:05.344117  133241 cri.go:89] found id: ""
	I1210 01:12:05.344143  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.344156  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:05.344164  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:05.344222  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:05.375039  133241 cri.go:89] found id: ""
	I1210 01:12:05.375067  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.375079  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:05.375086  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:05.375146  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:05.407623  133241 cri.go:89] found id: ""
	I1210 01:12:05.407649  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.407657  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:05.407665  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:05.407723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:05.441018  133241 cri.go:89] found id: ""
	I1210 01:12:05.441047  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.441055  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:05.441061  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:05.441123  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:05.471864  133241 cri.go:89] found id: ""
	I1210 01:12:05.471895  133241 logs.go:282] 0 containers: []
	W1210 01:12:05.471907  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:05.471918  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:05.471931  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:05.536855  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:05.536881  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:05.536896  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:05.617577  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:05.617612  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:05.654150  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:05.654188  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:05.707690  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:05.707720  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.220391  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:08.232904  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:08.232961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:08.271892  133241 cri.go:89] found id: ""
	I1210 01:12:08.271921  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.271933  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:08.271939  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:08.272004  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:08.304534  133241 cri.go:89] found id: ""
	I1210 01:12:08.304556  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.304563  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:08.304569  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:08.304620  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:08.338410  133241 cri.go:89] found id: ""
	I1210 01:12:08.338441  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.338451  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:08.338459  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:08.338523  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:08.370412  133241 cri.go:89] found id: ""
	I1210 01:12:08.370438  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.370449  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:08.370456  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:08.370515  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:08.401137  133241 cri.go:89] found id: ""
	I1210 01:12:08.401161  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.401169  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:08.401175  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:08.401224  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:08.436185  133241 cri.go:89] found id: ""
	I1210 01:12:08.436220  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.436232  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:08.436241  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:08.436308  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:08.468648  133241 cri.go:89] found id: ""
	I1210 01:12:08.468677  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.468696  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:08.468704  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:08.468764  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:08.506817  133241 cri.go:89] found id: ""
	I1210 01:12:08.506852  133241 logs.go:282] 0 containers: []
	W1210 01:12:08.506865  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:08.506878  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:08.506898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:08.565209  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:08.565240  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:08.581630  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:08.581675  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:08.663163  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:08.663189  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:08.663201  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:08.744843  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:08.744888  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:08.290806  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:10.790414  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.821280  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.821710  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:07.755386  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:09.759064  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.256087  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:11.282449  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:11.295381  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:11.295443  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:11.328119  133241 cri.go:89] found id: ""
	I1210 01:12:11.328145  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.328156  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:11.328162  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:11.328215  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:11.360864  133241 cri.go:89] found id: ""
	I1210 01:12:11.360895  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.360906  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:11.360914  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:11.360979  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:11.394838  133241 cri.go:89] found id: ""
	I1210 01:12:11.394862  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.394871  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:11.394876  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:11.394928  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:11.424174  133241 cri.go:89] found id: ""
	I1210 01:12:11.424216  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.424228  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:11.424236  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:11.424295  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:11.455057  133241 cri.go:89] found id: ""
	I1210 01:12:11.455083  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.455095  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:11.455102  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:11.455173  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:11.485755  133241 cri.go:89] found id: ""
	I1210 01:12:11.485783  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.485791  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:11.485797  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:11.485850  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:11.516921  133241 cri.go:89] found id: ""
	I1210 01:12:11.516952  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.516963  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:11.516970  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:11.517029  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:11.547484  133241 cri.go:89] found id: ""
	I1210 01:12:11.547510  133241 logs.go:282] 0 containers: []
	W1210 01:12:11.547518  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:11.547527  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:11.547540  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:11.582392  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:11.582419  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:11.635271  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:11.635306  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:11.647460  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:11.647492  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:11.713562  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:11.713584  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:11.713599  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.299112  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:14.314813  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:14.314886  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:14.365870  133241 cri.go:89] found id: ""
	I1210 01:12:14.365907  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.365925  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:14.365934  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:14.365998  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:14.399023  133241 cri.go:89] found id: ""
	I1210 01:12:14.399046  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.399054  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:14.399060  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:14.399106  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:14.432464  133241 cri.go:89] found id: ""
	I1210 01:12:14.432490  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.432498  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:14.432504  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:14.432559  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:14.462625  133241 cri.go:89] found id: ""
	I1210 01:12:14.462657  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.462668  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:14.462675  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:14.462723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:14.494853  133241 cri.go:89] found id: ""
	I1210 01:12:14.494884  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.494895  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:14.494903  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:14.494968  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:14.528863  133241 cri.go:89] found id: ""
	I1210 01:12:14.528898  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.528909  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:14.528917  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:14.528985  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:14.563527  133241 cri.go:89] found id: ""
	I1210 01:12:14.563557  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.563568  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:14.563575  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:14.563633  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:14.592383  133241 cri.go:89] found id: ""
	I1210 01:12:14.592419  133241 logs.go:282] 0 containers: []
	W1210 01:12:14.592429  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:14.592440  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:14.592453  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:14.604471  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:14.604498  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:12:12.790681  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:15.289761  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:12.321375  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.321765  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.820568  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:14.256568  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:16.755323  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	W1210 01:12:14.671647  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:14.671673  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:14.671686  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:14.749619  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:14.749648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:14.783668  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:14.783700  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.337203  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:17.349666  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:17.349726  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:17.380558  133241 cri.go:89] found id: ""
	I1210 01:12:17.380584  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.380595  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:17.380603  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:17.380663  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:17.413026  133241 cri.go:89] found id: ""
	I1210 01:12:17.413060  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.413072  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:17.413080  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:17.413149  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:17.444972  133241 cri.go:89] found id: ""
	I1210 01:12:17.445003  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.445014  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:17.445022  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:17.445081  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:17.477555  133241 cri.go:89] found id: ""
	I1210 01:12:17.477580  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.477588  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:17.477594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:17.477641  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:17.508550  133241 cri.go:89] found id: ""
	I1210 01:12:17.508574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.508582  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:17.508588  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:17.508671  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:17.538537  133241 cri.go:89] found id: ""
	I1210 01:12:17.538574  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.538586  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:17.538594  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:17.538655  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:17.571816  133241 cri.go:89] found id: ""
	I1210 01:12:17.571843  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.571851  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:17.571859  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:17.571916  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:17.602437  133241 cri.go:89] found id: ""
	I1210 01:12:17.602465  133241 logs.go:282] 0 containers: []
	W1210 01:12:17.602484  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:17.602502  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:17.602517  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:17.652904  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:17.652936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:17.664983  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:17.665006  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:17.732580  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:17.732606  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:17.732622  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:17.813561  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:17.813598  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:17.290624  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:19.291031  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:21.790058  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.821021  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.821538  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:18.755611  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.756570  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:20.349846  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:20.361680  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:20.361816  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:20.394316  133241 cri.go:89] found id: ""
	I1210 01:12:20.394338  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.394345  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:20.394350  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:20.394395  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:20.432172  133241 cri.go:89] found id: ""
	I1210 01:12:20.432196  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.432204  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:20.432209  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:20.432256  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:20.464019  133241 cri.go:89] found id: ""
	I1210 01:12:20.464042  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.464049  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:20.464055  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:20.464101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:20.496239  133241 cri.go:89] found id: ""
	I1210 01:12:20.496264  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.496271  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:20.496277  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:20.496325  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:20.527890  133241 cri.go:89] found id: ""
	I1210 01:12:20.527920  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.527932  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:20.527939  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:20.527996  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:20.558333  133241 cri.go:89] found id: ""
	I1210 01:12:20.558360  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.558368  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:20.558374  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:20.558425  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:20.589431  133241 cri.go:89] found id: ""
	I1210 01:12:20.589461  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.589472  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:20.589480  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:20.589542  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:20.618988  133241 cri.go:89] found id: ""
	I1210 01:12:20.619018  133241 logs.go:282] 0 containers: []
	W1210 01:12:20.619032  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:20.619042  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:20.619056  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:20.669620  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:20.669648  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:20.681405  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:20.681428  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:20.745196  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:20.745226  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:20.745243  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:20.823522  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:20.823548  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.360499  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:23.373249  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:23.373315  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:23.405186  133241 cri.go:89] found id: ""
	I1210 01:12:23.405207  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.405215  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:23.405224  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:23.405269  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:23.440082  133241 cri.go:89] found id: ""
	I1210 01:12:23.440118  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.440138  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:23.440146  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:23.440217  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:23.473962  133241 cri.go:89] found id: ""
	I1210 01:12:23.473991  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.474001  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:23.474010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:23.474066  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:23.505004  133241 cri.go:89] found id: ""
	I1210 01:12:23.505028  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.505036  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:23.505042  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:23.505087  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:23.539383  133241 cri.go:89] found id: ""
	I1210 01:12:23.539416  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.539427  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:23.539435  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:23.539502  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:23.569371  133241 cri.go:89] found id: ""
	I1210 01:12:23.569402  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.569412  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:23.569420  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:23.569487  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:23.599718  133241 cri.go:89] found id: ""
	I1210 01:12:23.599740  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.599748  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:23.599754  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:23.599798  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:23.633483  133241 cri.go:89] found id: ""
	I1210 01:12:23.633513  133241 logs.go:282] 0 containers: []
	W1210 01:12:23.633527  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:23.633538  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:23.633572  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:23.645791  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:23.645814  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:23.706819  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:23.706842  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:23.706858  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:23.792257  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:23.792283  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:23.832356  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:23.832384  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:23.790991  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.289467  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.321221  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.321373  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:23.256427  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:25.256459  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.257652  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:26.383157  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:26.395778  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:26.395834  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:26.428709  133241 cri.go:89] found id: ""
	I1210 01:12:26.428738  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.428750  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:26.428758  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:26.428823  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:26.463421  133241 cri.go:89] found id: ""
	I1210 01:12:26.463451  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.463470  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:26.463479  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:26.463541  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:26.494783  133241 cri.go:89] found id: ""
	I1210 01:12:26.494813  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.494826  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:26.494834  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:26.494894  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:26.524395  133241 cri.go:89] found id: ""
	I1210 01:12:26.524423  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.524434  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:26.524442  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:26.524505  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:26.554102  133241 cri.go:89] found id: ""
	I1210 01:12:26.554135  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.554146  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:26.554153  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:26.554218  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:26.584091  133241 cri.go:89] found id: ""
	I1210 01:12:26.584119  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.584127  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:26.584133  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:26.584188  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:26.618194  133241 cri.go:89] found id: ""
	I1210 01:12:26.618221  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.618229  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:26.618234  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:26.618282  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:26.652597  133241 cri.go:89] found id: ""
	I1210 01:12:26.652632  133241 logs.go:282] 0 containers: []
	W1210 01:12:26.652643  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:26.652657  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:26.652674  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:26.724236  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:26.724262  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:26.724277  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:26.802706  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:26.802745  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:26.851153  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:26.851184  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:26.902459  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:26.902489  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.415298  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:29.428093  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:29.428168  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:29.460651  133241 cri.go:89] found id: ""
	I1210 01:12:29.460678  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.460686  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:29.460692  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:29.460745  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:29.490971  133241 cri.go:89] found id: ""
	I1210 01:12:29.491000  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.491009  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:29.491015  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:29.491064  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:29.521465  133241 cri.go:89] found id: ""
	I1210 01:12:29.521496  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.521509  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:29.521517  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:29.521592  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:29.555709  133241 cri.go:89] found id: ""
	I1210 01:12:29.555736  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.555744  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:29.555750  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:29.555812  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:29.589891  133241 cri.go:89] found id: ""
	I1210 01:12:29.589918  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.589928  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:29.589935  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:29.590006  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:29.620929  133241 cri.go:89] found id: ""
	I1210 01:12:29.620959  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.620989  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:29.620998  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:29.621060  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:28.290708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.290750  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:27.822436  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:30.320877  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.756698  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:31.756872  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:29.652297  133241 cri.go:89] found id: ""
	I1210 01:12:29.652322  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.652332  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:29.652339  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:29.652400  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:29.685881  133241 cri.go:89] found id: ""
	I1210 01:12:29.685904  133241 logs.go:282] 0 containers: []
	W1210 01:12:29.685912  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:29.685922  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:29.685936  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:29.734856  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:29.734889  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:29.747270  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:29.747297  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:29.811253  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:29.811276  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:29.811292  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:29.888151  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:29.888187  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.425743  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:32.438647  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:32.438723  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:32.477466  133241 cri.go:89] found id: ""
	I1210 01:12:32.477489  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.477498  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:32.477503  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:32.477553  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:32.509698  133241 cri.go:89] found id: ""
	I1210 01:12:32.509732  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.509746  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:32.509753  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:32.509811  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:32.540873  133241 cri.go:89] found id: ""
	I1210 01:12:32.540903  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.540911  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:32.540919  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:32.540981  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:32.571143  133241 cri.go:89] found id: ""
	I1210 01:12:32.571168  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.571179  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:32.571186  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:32.571253  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:32.604797  133241 cri.go:89] found id: ""
	I1210 01:12:32.604829  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.604839  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:32.604847  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:32.604902  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:32.640179  133241 cri.go:89] found id: ""
	I1210 01:12:32.640204  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.640212  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:32.640218  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:32.640265  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:32.671103  133241 cri.go:89] found id: ""
	I1210 01:12:32.671130  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.671138  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:32.671144  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:32.671195  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:32.709038  133241 cri.go:89] found id: ""
	I1210 01:12:32.709069  133241 logs.go:282] 0 containers: []
	W1210 01:12:32.709080  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:32.709092  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:32.709107  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:32.764933  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:32.764963  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:32.777149  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:32.777172  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:32.842233  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:32.842256  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:32.842273  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:32.923533  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:32.923569  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:32.291302  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.790708  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:32.321782  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.821161  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.821244  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:34.256937  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:36.756894  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:35.462284  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:35.476392  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:35.476465  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:35.509483  133241 cri.go:89] found id: ""
	I1210 01:12:35.509507  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.509515  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:35.509521  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:35.509568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:35.546324  133241 cri.go:89] found id: ""
	I1210 01:12:35.546357  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.546369  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:35.546385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:35.546457  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:35.580578  133241 cri.go:89] found id: ""
	I1210 01:12:35.580608  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.580618  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:35.580626  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:35.580695  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:35.613220  133241 cri.go:89] found id: ""
	I1210 01:12:35.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.613253  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:35.613259  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:35.613318  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:35.650713  133241 cri.go:89] found id: ""
	I1210 01:12:35.650741  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.650751  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:35.650757  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:35.650826  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:35.685084  133241 cri.go:89] found id: ""
	I1210 01:12:35.685121  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.685134  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:35.685141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:35.685196  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:35.717092  133241 cri.go:89] found id: ""
	I1210 01:12:35.717118  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.717130  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:35.717141  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:35.717197  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:35.753691  133241 cri.go:89] found id: ""
	I1210 01:12:35.753722  133241 logs.go:282] 0 containers: []
	W1210 01:12:35.753732  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:35.753751  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:35.753766  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:35.807280  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:35.807314  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:35.821862  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:35.821894  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:35.892640  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:35.892667  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:35.892684  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:35.967250  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:35.967291  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
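
The block above is minikube's recurring control-plane probe for the v1.20.0 run: it asks CRI-O for each expected component container by name and, finding none, falls back to host-level log collection (kubelet, dmesg, describe nodes, CRI-O, container status). The same probe can be reproduced by hand on the node; the individual commands below are the ones shown in the log, only the loop wrapper is illustrative.

# Probe for control-plane containers the way the log above does (loop is illustrative;
# the commands themselves appear in the log).
for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
  if [ -z "$(sudo crictl ps -a --quiet --name="$c")" ]; then
    echo "No container was found matching \"$c\""
  fi
done
# Fallback log collection when no containers exist:
sudo journalctl -u kubelet -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig  # fails with "connection refused" while the apiserver is down
sudo journalctl -u crio -n 400
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
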
	I1210 01:12:38.505643  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:38.518703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:38.518762  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:38.554866  133241 cri.go:89] found id: ""
	I1210 01:12:38.554904  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.554917  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:38.554926  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:38.554983  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:38.586725  133241 cri.go:89] found id: ""
	I1210 01:12:38.586757  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.586770  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:38.586779  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:38.586840  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:38.617766  133241 cri.go:89] found id: ""
	I1210 01:12:38.617791  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.617799  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:38.617804  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:38.617855  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:38.647743  133241 cri.go:89] found id: ""
	I1210 01:12:38.647770  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.647779  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:38.647785  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:38.647838  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:38.680523  133241 cri.go:89] found id: ""
	I1210 01:12:38.680553  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.680564  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:38.680572  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:38.680634  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:38.714271  133241 cri.go:89] found id: ""
	I1210 01:12:38.714299  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.714307  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:38.714314  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:38.714366  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:38.751180  133241 cri.go:89] found id: ""
	I1210 01:12:38.751213  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.751226  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:38.751235  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:38.751307  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:38.783754  133241 cri.go:89] found id: ""
	I1210 01:12:38.783778  133241 logs.go:282] 0 containers: []
	W1210 01:12:38.783787  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:38.783796  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:38.783807  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:38.843285  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:38.843332  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:38.856901  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:38.856935  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:38.923720  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:38.923747  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:38.923764  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:39.002855  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:39.002898  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:37.290816  132693 pod_ready.go:103] pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:38.785325  132693 pod_ready.go:82] duration metric: took 4m0.000828619s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" ...
	E1210 01:12:38.785348  132693 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-mhxtf" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:12:38.785371  132693 pod_ready.go:39] duration metric: took 4m7.530994938s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:12:38.785436  132693 kubeadm.go:597] duration metric: took 4m15.56153133s to restartPrimaryControlPlane
	W1210 01:12:38.785555  132693 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:38.785612  132693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
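
At this point the embed-certs run (pid 132693) has waited the full 4m0s for metrics-server-6867b74b74-mhxtf to become Ready, gives up on restarting the existing control plane, and falls back to kubeadm reset. An equivalent manual readiness check, illustrative only and assuming the default kubeconfig context carries the profile name, would be:

# Wait up to 4 minutes for the pod named in the log to become Ready:
kubectl --context embed-certs-274758 -n kube-system \
  wait --for=condition=Ready pod/metrics-server-6867b74b74-mhxtf --timeout=4m0s
# If it times out, inspect events and container status:
kubectl --context embed-certs-274758 -n kube-system describe pod metrics-server-6867b74b74-mhxtf
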
	I1210 01:12:38.822192  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.321407  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:39.256018  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.256892  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:41.542152  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:41.556438  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:41.556517  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:41.587666  133241 cri.go:89] found id: ""
	I1210 01:12:41.587695  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.587706  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:41.587714  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:41.587772  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:41.620472  133241 cri.go:89] found id: ""
	I1210 01:12:41.620498  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.620506  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:41.620512  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:41.620568  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:41.653153  133241 cri.go:89] found id: ""
	I1210 01:12:41.653196  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.653209  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:41.653217  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:41.653275  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:41.685358  133241 cri.go:89] found id: ""
	I1210 01:12:41.685387  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.685395  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:41.685401  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:41.685459  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:41.715972  133241 cri.go:89] found id: ""
	I1210 01:12:41.715996  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.716004  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:41.716010  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:41.716058  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:41.750651  133241 cri.go:89] found id: ""
	I1210 01:12:41.750684  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.750695  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:41.750703  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:41.750781  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:41.788845  133241 cri.go:89] found id: ""
	I1210 01:12:41.788872  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.788882  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:41.788890  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:41.788953  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:41.821679  133241 cri.go:89] found id: ""
	I1210 01:12:41.821705  133241 logs.go:282] 0 containers: []
	W1210 01:12:41.821716  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:41.821726  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:41.821741  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:41.873177  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:41.873207  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:41.885639  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:41.885663  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:41.954882  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:41.954906  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:41.954922  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:42.032868  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:42.032911  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.569896  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:44.582137  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:12:44.582239  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:12:44.613216  133241 cri.go:89] found id: ""
	I1210 01:12:44.613245  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.613255  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:12:44.613264  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:12:44.613326  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:12:43.820651  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.821203  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:43.755681  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:45.755860  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:44.642860  133241 cri.go:89] found id: ""
	I1210 01:12:44.642887  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.642897  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:12:44.642904  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:12:44.642961  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:12:44.675879  133241 cri.go:89] found id: ""
	I1210 01:12:44.675908  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.675920  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:12:44.675928  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:12:44.675992  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:12:44.705466  133241 cri.go:89] found id: ""
	I1210 01:12:44.705490  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.705499  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:12:44.705505  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:12:44.705552  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:12:44.740999  133241 cri.go:89] found id: ""
	I1210 01:12:44.741029  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.741038  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:12:44.741043  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:12:44.741101  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:12:44.774933  133241 cri.go:89] found id: ""
	I1210 01:12:44.774963  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.774974  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:12:44.774981  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:12:44.775044  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:12:44.806061  133241 cri.go:89] found id: ""
	I1210 01:12:44.806085  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.806093  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:12:44.806100  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:12:44.806163  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:12:44.837759  133241 cri.go:89] found id: ""
	I1210 01:12:44.837781  133241 logs.go:282] 0 containers: []
	W1210 01:12:44.837789  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:12:44.837797  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:12:44.837808  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:12:44.872830  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:12:44.872881  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:12:44.925476  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:12:44.925505  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:12:44.937814  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:12:44.937838  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:12:45.012002  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:12:45.012029  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:12:45.012046  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:12:47.589735  133241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:12:47.603668  133241 kubeadm.go:597] duration metric: took 4m3.306612686s to restartPrimaryControlPlane
	W1210 01:12:47.603739  133241 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:12:47.603761  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:12:48.154198  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:12:48.167608  133241 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:12:48.176803  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:12:48.185508  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:12:48.185527  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:12:48.185572  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:12:48.193940  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:12:48.193992  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:12:48.202384  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:12:48.210626  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:12:48.210672  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:12:48.219377  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.227459  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:12:48.227493  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:12:48.235967  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:12:48.244142  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:12:48.244177  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
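
After kubeadm reset, minikube checks for stale kubeconfig files under /etc/kubernetes; since none of them exist here (and therefore none contain the https://control-plane.minikube.internal:8443 endpoint), each grep exits non-zero and the file is removed defensively before kubeadm init is re-run. Condensed, the cleanup above amounts to the following (commands as shown in the log; the loop is only an illustration):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
    || sudo rm -f /etc/kubernetes/$f   # remove when the endpoint is absent or the file is missing
done
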
	I1210 01:12:48.252961  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:12:48.323011  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:12:48.323104  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:12:48.458259  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:12:48.458424  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:12:48.458536  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:12:48.630626  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:12:48.632393  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:12:48.632510  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:12:48.632611  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:12:48.633714  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:12:48.633800  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:12:48.633862  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:12:48.633957  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:12:48.634058  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:12:48.634151  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:12:48.634265  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:12:48.634426  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:12:48.634546  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:12:48.634640  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:12:48.756866  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:12:48.885589  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:12:49.551602  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:12:49.667812  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:12:49.683125  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:12:49.684322  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:12:49.684390  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:12:49.830086  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:12:48.322646  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:50.821218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:47.756532  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.757416  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:52.256110  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:49.831618  133241 out.go:235]   - Booting up control plane ...
	I1210 01:12:49.831733  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:12:49.836164  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:12:49.837117  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:12:49.845538  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:12:49.848331  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:12:53.320607  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:55.321218  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:54.256922  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:56.755279  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:57.321409  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:59.321826  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.821159  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:12:58.757281  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:01.256065  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.297959  132693 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.512320802s)
	I1210 01:13:05.298031  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:05.321593  132693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:05.334072  132693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:05.346063  132693 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:05.346089  132693 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:05.346143  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:13:05.360019  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:05.360087  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:05.372583  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:13:05.384130  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:05.384188  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:05.392629  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.400642  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:05.400700  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:05.410803  132693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:13:05.419350  132693 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:05.419390  132693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:05.429452  132693 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:05.481014  132693 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:05.481092  132693 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:05.597528  132693 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:05.597654  132693 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:05.597756  132693 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:05.612251  132693 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:05.613988  132693 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:05.614052  132693 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:05.614111  132693 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:05.614207  132693 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:05.614297  132693 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:05.614409  132693 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:05.614477  132693 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:05.614568  132693 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:05.614645  132693 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:05.614739  132693 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:05.614860  132693 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:05.614923  132693 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:05.615007  132693 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:05.946241  132693 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:06.262996  132693 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:06.492684  132693 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:06.618787  132693 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:06.805590  132693 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:06.806311  132693 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:06.808813  132693 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:06.810481  132693 out.go:235]   - Booting up control plane ...
	I1210 01:13:06.810631  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:06.810746  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:06.810812  132693 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:03.821406  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:05.821749  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:03.756325  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.257324  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:06.832919  132693 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:06.839052  132693 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:06.839096  132693 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:06.969474  132693 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:06.969623  132693 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:07.971413  132693 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001911774s
	I1210 01:13:07.971493  132693 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:07.822174  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:09.822828  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.473566  132693 kubeadm.go:310] [api-check] The API server is healthy after 4.502020736s
	I1210 01:13:12.487877  132693 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:12.501570  132693 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:12.529568  132693 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:12.529808  132693 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-274758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:12.539578  132693 kubeadm.go:310] [bootstrap-token] Using token: tq1yzs.mz19z1mkmh869v39
	I1210 01:13:08.757580  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:11.256597  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:12.540687  132693 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:12.540830  132693 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:12.546018  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:12.554335  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:12.557480  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:12.562006  132693 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:12.568058  132693 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:12.880502  132693 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:13.367386  132693 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:13.879413  132693 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:13.880417  132693 kubeadm.go:310] 
	I1210 01:13:13.880519  132693 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:13.880541  132693 kubeadm.go:310] 
	I1210 01:13:13.880619  132693 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:13.880629  132693 kubeadm.go:310] 
	I1210 01:13:13.880662  132693 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:13.880741  132693 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:13.880829  132693 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:13.880851  132693 kubeadm.go:310] 
	I1210 01:13:13.880930  132693 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:13.880943  132693 kubeadm.go:310] 
	I1210 01:13:13.881016  132693 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:13.881029  132693 kubeadm.go:310] 
	I1210 01:13:13.881114  132693 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:13.881255  132693 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:13.881326  132693 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:13.881334  132693 kubeadm.go:310] 
	I1210 01:13:13.881429  132693 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:13.881542  132693 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:13.881553  132693 kubeadm.go:310] 
	I1210 01:13:13.881680  132693 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.881815  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:13.881843  132693 kubeadm.go:310] 	--control-plane 
	I1210 01:13:13.881854  132693 kubeadm.go:310] 
	I1210 01:13:13.881973  132693 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:13.881982  132693 kubeadm.go:310] 
	I1210 01:13:13.882072  132693 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tq1yzs.mz19z1mkmh869v39 \
	I1210 01:13:13.882230  132693 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:13.883146  132693 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
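
kubeadm init for embed-certs-274758 completed, leaving only the warning that the kubelet service is not enabled. A few manual sanity checks at this stage (illustrative; the kubectl binary path and port 8443 are taken from the log) might look like:

# Confirm the API server answers and the node registered:
sudo /var/lib/minikube/binaries/v1.31.2/kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf
curl -sk https://localhost:8443/healthz; echo
# Address the [WARNING Service-Kubelet] above so kubelet starts on reboot:
sudo systemctl enable kubelet.service
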
	I1210 01:13:13.883196  132693 cni.go:84] Creating CNI manager for ""
	I1210 01:13:13.883217  132693 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:13.885371  132693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:13.886543  132693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:13.897482  132693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
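
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration recommended above for the kvm2 + crio combination. Its exact contents are not printed in this log, so the snippet below is only a generic bridge-plugin conflist of the same general shape (the subnet and the scratch file path are assumptions, not minikube's actual file):

cat <<'EOF' > /tmp/example-bridge.conflist
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
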
	I1210 01:13:13.915107  132693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:13.915244  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:13.915242  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-274758 minikube.k8s.io/updated_at=2024_12_10T01_13_13_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=embed-certs-274758 minikube.k8s.io/primary=true
	I1210 01:13:13.928635  132693 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:14.131983  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:14.633015  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.132113  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:15.632347  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.132367  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:16.632749  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:12.321479  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:14.321663  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:16.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:13.756549  133282 pod_ready.go:103] pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:15.751204  133282 pod_ready.go:82] duration metric: took 4m0.000700419s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:15.751234  133282 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zpj2g" in "kube-system" namespace to be "Ready" (will not retry!)
	I1210 01:13:15.751259  133282 pod_ready.go:39] duration metric: took 4m6.019142998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:15.751290  133282 kubeadm.go:597] duration metric: took 4m13.842336769s to restartPrimaryControlPlane
	W1210 01:13:15.751381  133282 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1210 01:13:15.751413  133282 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:13:17.132359  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:17.632050  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.132263  132693 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:18.225462  132693 kubeadm.go:1113] duration metric: took 4.310260508s to wait for elevateKubeSystemPrivileges
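
The burst of "kubectl get sa default" calls above is the wait for the default ServiceAccount to appear, which is what the 4.3s elevateKubeSystemPrivileges metric measures. The same wait as a one-liner on the node (the kubectl invocation is the one from the log; the retry loop is illustrative):

until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
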
	I1210 01:13:18.225504  132693 kubeadm.go:394] duration metric: took 4m55.046897812s to StartCluster
	I1210 01:13:18.225547  132693 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.225650  132693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:18.227523  132693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:18.227776  132693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.76 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:18.227852  132693 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
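
Only default-storageclass, metrics-server and storage-provisioner are enabled in this profile's toEnable map. For reference, the same toggles are available from the minikube CLI (illustrative; this is not what the test harness runs here):

minikube -p embed-certs-274758 addons enable metrics-server
minikube -p embed-certs-274758 addons enable storage-provisioner
minikube -p embed-certs-274758 addons list
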
	I1210 01:13:18.227928  132693 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-274758"
	I1210 01:13:18.227962  132693 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-274758"
	I1210 01:13:18.227961  132693 addons.go:69] Setting default-storageclass=true in profile "embed-certs-274758"
	I1210 01:13:18.227999  132693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-274758"
	I1210 01:13:18.228012  132693 config.go:182] Loaded profile config "embed-certs-274758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1210 01:13:18.227973  132693 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:18.227983  132693 addons.go:69] Setting metrics-server=true in profile "embed-certs-274758"
	I1210 01:13:18.228079  132693 addons.go:234] Setting addon metrics-server=true in "embed-certs-274758"
	W1210 01:13:18.228096  132693 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:18.228130  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228085  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.228468  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228508  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228521  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228554  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.228608  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.228660  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.229260  132693 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:18.230643  132693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:18.244916  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I1210 01:13:18.245098  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I1210 01:13:18.245389  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.245571  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246186  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246210  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246288  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1210 01:13:18.246344  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.246364  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.246598  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246769  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.246771  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.246825  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.247215  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.247242  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.247367  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.247418  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.247638  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.248206  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.248244  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.250542  132693 addons.go:234] Setting addon default-storageclass=true in "embed-certs-274758"
	W1210 01:13:18.250579  132693 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:18.250614  132693 host.go:66] Checking if "embed-certs-274758" exists ...
	I1210 01:13:18.250951  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.250999  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.265194  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I1210 01:13:18.265779  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I1210 01:13:18.266283  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.266478  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.267212  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267234  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267302  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 01:13:18.267316  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.267329  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.267647  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.267700  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.268228  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.268248  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.268250  132693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:18.268276  132693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:18.268319  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268679  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.268889  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.269065  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.271273  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.271495  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.272879  132693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:18.272898  132693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:18.274238  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:18.274260  132693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:18.274279  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.274371  132693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.274394  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:18.274411  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.278685  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279199  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.279245  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.279405  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.279557  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.279684  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.279823  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.280345  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281064  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.281083  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.281095  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.281282  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.281455  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.281643  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.285915  132693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36787
	I1210 01:13:18.286306  132693 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:18.286727  132693 main.go:141] libmachine: Using API Version  1
	I1210 01:13:18.286745  132693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:18.287055  132693 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:18.287234  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetState
	I1210 01:13:18.288732  132693 main.go:141] libmachine: (embed-certs-274758) Calling .DriverName
	I1210 01:13:18.288930  132693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.288945  132693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:18.288962  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHHostname
	I1210 01:13:18.291528  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291801  132693 main.go:141] libmachine: (embed-certs-274758) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:3c:b1", ip: ""} in network mk-embed-certs-274758: {Iface:virbr4 ExpiryTime:2024-12-10 02:08:10 +0000 UTC Type:0 Mac:52:54:00:d3:3c:b1 Iaid: IPaddr:192.168.72.76 Prefix:24 Hostname:embed-certs-274758 Clientid:01:52:54:00:d3:3c:b1}
	I1210 01:13:18.291821  132693 main.go:141] libmachine: (embed-certs-274758) DBG | domain embed-certs-274758 has defined IP address 192.168.72.76 and MAC address 52:54:00:d3:3c:b1 in network mk-embed-certs-274758
	I1210 01:13:18.291990  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHPort
	I1210 01:13:18.292175  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHKeyPath
	I1210 01:13:18.292303  132693 main.go:141] libmachine: (embed-certs-274758) Calling .GetSSHUsername
	I1210 01:13:18.292532  132693 sshutil.go:53] new ssh client: &{IP:192.168.72.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/embed-certs-274758/id_rsa Username:docker}
	I1210 01:13:18.426704  132693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:18.454857  132693 node_ready.go:35] waiting up to 6m0s for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470552  132693 node_ready.go:49] node "embed-certs-274758" has status "Ready":"True"
	I1210 01:13:18.470590  132693 node_ready.go:38] duration metric: took 15.702625ms for node "embed-certs-274758" to be "Ready" ...
	I1210 01:13:18.470604  132693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:18.480748  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.569014  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:18.569040  132693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:18.605108  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:18.605137  132693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:18.606158  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:18.614827  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:18.647542  132693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:18.647573  132693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:18.726060  132693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
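
The apply step above stages the metrics-server manifests under /etc/kubernetes/addons/ and runs kubectl against the in-VM kubeconfig. The following is an illustrative sketch only of that same invocation using os/exec; paths are copied from the log, and this is not minikube's ssh_runner implementation, which executes the identical command inside the guest over SSH.

    // Sketch: re-run the addon apply shown in the log with os/exec.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        // sudo accepts leading VAR=value arguments, matching the command in the log.
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
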
	I1210 01:13:19.536876  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.536905  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.536988  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537020  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537177  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537215  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537223  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537234  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537239  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537252  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537261  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.537269  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537324  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.537524  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537623  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.537922  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.537957  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.537981  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.556234  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.556255  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.556555  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.556567  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.556572  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.977786  132693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.251679295s)
	I1210 01:13:19.977848  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.977861  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978210  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978227  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978253  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978288  132693 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:19.978297  132693 main.go:141] libmachine: (embed-certs-274758) Calling .Close
	I1210 01:13:19.978536  132693 main.go:141] libmachine: (embed-certs-274758) DBG | Closing plugin on server side
	I1210 01:13:19.978557  132693 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:19.978581  132693 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:19.978593  132693 addons.go:475] Verifying addon metrics-server=true in "embed-certs-274758"
	I1210 01:13:19.980096  132693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:19.981147  132693 addons.go:510] duration metric: took 1.753302974s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:20.487221  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:20.487244  132693 pod_ready.go:82] duration metric: took 2.006464893s for pod "coredns-7c65d6cfc9-bgjgh" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:20.487253  132693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:18.822687  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:21.322845  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:22.493358  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:24.993203  132693 pod_ready.go:103] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.492646  132693 pod_ready.go:93] pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.492669  132693 pod_ready.go:82] duration metric: took 5.005410057s for pod "coredns-7c65d6cfc9-m4qgb" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.492679  132693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497102  132693 pod_ready.go:93] pod "etcd-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.497119  132693 pod_ready.go:82] duration metric: took 4.434391ms for pod "etcd-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.497126  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501166  132693 pod_ready.go:93] pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.501181  132693 pod_ready.go:82] duration metric: took 4.048875ms for pod "kube-apiserver-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.501189  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505541  132693 pod_ready.go:93] pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.505565  132693 pod_ready.go:82] duration metric: took 4.369889ms for pod "kube-controller-manager-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.505579  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509548  132693 pod_ready.go:93] pod "kube-proxy-v28mz" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:25.509562  132693 pod_ready.go:82] duration metric: took 3.977138ms for pod "kube-proxy-v28mz" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:25.509568  132693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:23.322966  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:25.820854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:27.517005  132693 pod_ready.go:93] pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:27.517027  132693 pod_ready.go:82] duration metric: took 2.007452032s for pod "kube-scheduler-embed-certs-274758" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:27.517035  132693 pod_ready.go:39] duration metric: took 9.046411107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
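
The pod_ready lines above poll each system-critical pod until its Ready condition is true. A minimal client-go sketch of that kind of loop is shown below; the kubeconfig path and pod name are taken from the log and assumed reachable, and this is not the pod_ready.go implementation itself.

    // Sketch: wait for a pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-bgjgh", metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors and keep polling
                }
                return podReady(p), nil
            })
        fmt.Println("wait result:", err)
    }
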
	I1210 01:13:27.517052  132693 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:27.517101  132693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:27.531721  132693 api_server.go:72] duration metric: took 9.303907779s to wait for apiserver process to appear ...
	I1210 01:13:27.531750  132693 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:27.531768  132693 api_server.go:253] Checking apiserver healthz at https://192.168.72.76:8443/healthz ...
	I1210 01:13:27.536509  132693 api_server.go:279] https://192.168.72.76:8443/healthz returned 200:
	ok
	I1210 01:13:27.537428  132693 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:27.537448  132693 api_server.go:131] duration metric: took 5.691563ms to wait for apiserver health ...
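
The api_server.go lines above report a plain HTTPS probe of the apiserver's /healthz endpoint returning 200/ok. A self-contained sketch of such a probe follows; InsecureSkipVerify is used only to keep the example standalone and is not how minikube validates the endpoint.

    // Sketch: probe the apiserver healthz endpoint reported in the log.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.76:8443/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
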
	I1210 01:13:27.537462  132693 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:27.693218  132693 system_pods.go:59] 9 kube-system pods found
	I1210 01:13:27.693251  132693 system_pods.go:61] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:27.693257  132693 system_pods.go:61] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:27.693265  132693 system_pods.go:61] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:27.693269  132693 system_pods.go:61] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:27.693273  132693 system_pods.go:61] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:27.693276  132693 system_pods.go:61] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:27.693279  132693 system_pods.go:61] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:27.693285  132693 system_pods.go:61] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:27.693289  132693 system_pods.go:61] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:27.693296  132693 system_pods.go:74] duration metric: took 155.828167ms to wait for pod list to return data ...
	I1210 01:13:27.693305  132693 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:13:27.891018  132693 default_sa.go:45] found service account: "default"
	I1210 01:13:27.891046  132693 default_sa.go:55] duration metric: took 197.731166ms for default service account to be created ...
	I1210 01:13:27.891055  132693 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:13:28.095967  132693 system_pods.go:86] 9 kube-system pods found
	I1210 01:13:28.095996  132693 system_pods.go:89] "coredns-7c65d6cfc9-bgjgh" [277d23ef-ff20-414d-beb6-c6982712a423] Running
	I1210 01:13:28.096002  132693 system_pods.go:89] "coredns-7c65d6cfc9-m4qgb" [41253d1b-c010-41e2-9286-e9930025e9ff] Running
	I1210 01:13:28.096006  132693 system_pods.go:89] "etcd-embed-certs-274758" [b0c51f0d-a638-4f4f-b3df-95c7794facc9] Running
	I1210 01:13:28.096010  132693 system_pods.go:89] "kube-apiserver-embed-certs-274758" [ecdc5ff5-4d07-4802-98b8-8382132f7748] Running
	I1210 01:13:28.096014  132693 system_pods.go:89] "kube-controller-manager-embed-certs-274758" [e42fce81-4a93-498b-8897-31387873d181] Running
	I1210 01:13:28.096017  132693 system_pods.go:89] "kube-proxy-v28mz" [5cd47cc1-a085-4e77-850d-dde0c8ed6054] Running
	I1210 01:13:28.096021  132693 system_pods.go:89] "kube-scheduler-embed-certs-274758" [609e1329-cd71-4772-96cc-24b3620c511d] Running
	I1210 01:13:28.096027  132693 system_pods.go:89] "metrics-server-6867b74b74-mcw2c" [a7b75933-124c-4577-b26a-ad1c5c128910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:13:28.096031  132693 system_pods.go:89] "storage-provisioner" [71e4d38f-b0fe-43cf-a844-ba787287fda6] Running
	I1210 01:13:28.096039  132693 system_pods.go:126] duration metric: took 204.97831ms to wait for k8s-apps to be running ...
	I1210 01:13:28.096047  132693 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:13:28.096091  132693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:28.109766  132693 system_svc.go:56] duration metric: took 13.710817ms WaitForService to wait for kubelet
	I1210 01:13:28.109807  132693 kubeadm.go:582] duration metric: took 9.881998931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:13:28.109831  132693 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:13:28.290402  132693 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:13:28.290444  132693 node_conditions.go:123] node cpu capacity is 2
	I1210 01:13:28.290457  132693 node_conditions.go:105] duration metric: took 180.620817ms to run NodePressure ...
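
The node_conditions lines above read the node's reported capacity (ephemeral storage and CPU) as part of the NodePressure verification. A small client-go sketch of reading those capacity fields is given below, assuming the same kubeconfig path; it is illustrative, not the node_conditions.go code.

    // Sketch: list nodes and print cpu / ephemeral-storage capacity.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu capacity %s, ephemeral-storage capacity %s\n",
                n.Name, cpu.String(), storage.String())
        }
    }
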
	I1210 01:13:28.290472  132693 start.go:241] waiting for startup goroutines ...
	I1210 01:13:28.290478  132693 start.go:246] waiting for cluster config update ...
	I1210 01:13:28.290489  132693 start.go:255] writing updated cluster config ...
	I1210 01:13:28.290756  132693 ssh_runner.go:195] Run: rm -f paused
	I1210 01:13:28.341573  132693 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:13:28.343695  132693 out.go:177] * Done! kubectl is now configured to use "embed-certs-274758" cluster and "default" namespace by default
	I1210 01:13:28.321957  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:30.821091  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:29.849672  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:13:29.850163  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:29.850412  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:33.322460  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:35.822120  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:34.850843  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:34.851064  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:38.321590  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:40.322421  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:41.903973  133282 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.152536348s)
	I1210 01:13:41.904058  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:13:41.922104  133282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1210 01:13:41.932781  133282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:13:41.949147  133282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:13:41.949169  133282 kubeadm.go:157] found existing configuration files:
	
	I1210 01:13:41.949234  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1210 01:13:41.961475  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:13:41.961531  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:13:41.973790  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1210 01:13:41.985658  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:13:41.985718  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:13:41.996851  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.005612  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:13:42.005661  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:13:42.016316  133282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1210 01:13:42.025097  133282 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:13:42.025162  133282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:13:42.035841  133282 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:13:42.204343  133282 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:13:42.820637  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.821863  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:46.822010  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:44.851525  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:13:44.851699  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:13:50.610797  133282 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1210 01:13:50.610879  133282 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:13:50.610976  133282 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:13:50.611138  133282 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:13:50.611235  133282 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1210 01:13:50.611363  133282 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:13:50.612870  133282 out.go:235]   - Generating certificates and keys ...
	I1210 01:13:50.612937  133282 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:13:50.612990  133282 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:13:50.613065  133282 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:13:50.613142  133282 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:13:50.613213  133282 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:13:50.613291  133282 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:13:50.613383  133282 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:13:50.613468  133282 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:13:50.613583  133282 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:13:50.613711  133282 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:13:50.613784  133282 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:13:50.613871  133282 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:13:50.613951  133282 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:13:50.614035  133282 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1210 01:13:50.614113  133282 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:13:50.614231  133282 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:13:50.614318  133282 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:13:50.614396  133282 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:13:50.614483  133282 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:13:50.615840  133282 out.go:235]   - Booting up control plane ...
	I1210 01:13:50.615917  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:13:50.615985  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:13:50.616068  133282 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:13:50.616186  133282 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:13:50.616283  133282 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:13:50.616354  133282 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:13:50.616529  133282 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1210 01:13:50.616677  133282 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1210 01:13:50.616752  133282 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002388771s
	I1210 01:13:50.616858  133282 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1210 01:13:50.616942  133282 kubeadm.go:310] [api-check] The API server is healthy after 4.501731998s
	I1210 01:13:50.617063  133282 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1210 01:13:50.617214  133282 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1210 01:13:50.617302  133282 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1210 01:13:50.617556  133282 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-901295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1210 01:13:50.617633  133282 kubeadm.go:310] [bootstrap-token] Using token: qm0b8q.vohlzpntqihfsj2x
	I1210 01:13:50.618774  133282 out.go:235]   - Configuring RBAC rules ...
	I1210 01:13:50.618896  133282 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1210 01:13:50.619001  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1210 01:13:50.619167  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1210 01:13:50.619286  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1210 01:13:50.619432  133282 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1210 01:13:50.619563  133282 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1210 01:13:50.619724  133282 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1210 01:13:50.619788  133282 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1210 01:13:50.619855  133282 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1210 01:13:50.619865  133282 kubeadm.go:310] 
	I1210 01:13:50.619958  133282 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1210 01:13:50.619970  133282 kubeadm.go:310] 
	I1210 01:13:50.620071  133282 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1210 01:13:50.620084  133282 kubeadm.go:310] 
	I1210 01:13:50.620133  133282 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1210 01:13:50.620214  133282 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1210 01:13:50.620290  133282 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1210 01:13:50.620299  133282 kubeadm.go:310] 
	I1210 01:13:50.620384  133282 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1210 01:13:50.620393  133282 kubeadm.go:310] 
	I1210 01:13:50.620464  133282 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1210 01:13:50.620480  133282 kubeadm.go:310] 
	I1210 01:13:50.620554  133282 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1210 01:13:50.620656  133282 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1210 01:13:50.620747  133282 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1210 01:13:50.620756  133282 kubeadm.go:310] 
	I1210 01:13:50.620862  133282 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1210 01:13:50.620978  133282 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1210 01:13:50.620994  133282 kubeadm.go:310] 
	I1210 01:13:50.621111  133282 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621255  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 \
	I1210 01:13:50.621286  133282 kubeadm.go:310] 	--control-plane 
	I1210 01:13:50.621296  133282 kubeadm.go:310] 
	I1210 01:13:50.621365  133282 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1210 01:13:50.621374  133282 kubeadm.go:310] 
	I1210 01:13:50.621448  133282 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token qm0b8q.vohlzpntqihfsj2x \
	I1210 01:13:50.621569  133282 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f58183563ef012ba19490e91b4951ad764e8834fc8f21cd0c4b0e6017b139191 
	I1210 01:13:50.621593  133282 cni.go:84] Creating CNI manager for ""
	I1210 01:13:50.621608  133282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1210 01:13:50.622943  133282 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1210 01:13:49.321854  132605 pod_ready.go:103] pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:51.815742  132605 pod_ready.go:82] duration metric: took 4m0.000382174s for pod "metrics-server-6867b74b74-lwgxd" in "kube-system" namespace to be "Ready" ...
	E1210 01:13:51.815774  132605 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1210 01:13:51.815787  132605 pod_ready.go:39] duration metric: took 4m2.800798949s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:51.815811  132605 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:13:51.815854  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:51.815920  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:51.865972  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:51.866004  132605 cri.go:89] found id: ""
	I1210 01:13:51.866015  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:51.866098  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.871589  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:51.871648  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:51.909231  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:51.909256  132605 cri.go:89] found id: ""
	I1210 01:13:51.909266  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:51.909321  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.913562  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:51.913639  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:51.946623  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:51.946651  132605 cri.go:89] found id: ""
	I1210 01:13:51.946661  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:51.946721  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.950686  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:51.950756  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:51.988821  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:51.988845  132605 cri.go:89] found id: ""
	I1210 01:13:51.988856  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:51.988916  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:51.992776  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:51.992827  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:52.028882  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.028910  132605 cri.go:89] found id: ""
	I1210 01:13:52.028920  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:52.028974  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.033384  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:52.033467  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:52.068002  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:52.068030  132605 cri.go:89] found id: ""
	I1210 01:13:52.068038  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:52.068086  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.071868  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:52.071938  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:52.105726  132605 cri.go:89] found id: ""
	I1210 01:13:52.105751  132605 logs.go:282] 0 containers: []
	W1210 01:13:52.105760  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:52.105767  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:52.105822  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:52.146662  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:52.146690  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.146696  132605 cri.go:89] found id: ""
	I1210 01:13:52.146706  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:52.146769  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.150459  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:52.153921  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:52.153942  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:52.197327  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:52.197354  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
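
The logs.go lines above gather component logs by first listing container IDs with crictl and then tailing each container's log. The sketch below mirrors that two-step pattern with os/exec, using only the crictl flags visible in the log; it assumes crictl is on PATH and passwordless sudo.

    // Sketch: list kube-apiserver containers via crictl, then tail their logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("logs for %s failed: %v\n", id, err)
                continue
            }
            fmt.Printf("--- %s ---\n%s\n", id, logs)
        }
    }
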
	I1210 01:13:50.624049  133282 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1210 01:13:50.634300  133282 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
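
The bridge CNI step above stages a 496-byte conflist at /etc/cni/net.d/1-k8s.conflist. The exact file contents are not reproduced in the log; the sketch below writes an assumed, representative bridge + host-local IPAM conflist to the same path purely for illustration.

    // Sketch: write a minimal bridge CNI conflist (assumed content, not the staged file).
    package main

    import (
        "fmt"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write failed:", err)
        }
    }
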
	I1210 01:13:50.650835  133282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1210 01:13:50.650955  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:50.650957  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-901295 minikube.k8s.io/updated_at=2024_12_10T01_13_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=default-k8s-diff-port-901295 minikube.k8s.io/primary=true
	I1210 01:13:50.661855  133282 ops.go:34] apiserver oom_adj: -16
	I1210 01:13:50.846244  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.347288  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:51.846690  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.346721  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:52.846891  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.346360  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:53.846284  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.346480  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.846394  133282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1210 01:13:54.950848  133282 kubeadm.go:1113] duration metric: took 4.299939675s to wait for elevateKubeSystemPrivileges
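
The elevateKubeSystemPrivileges wait above repeatedly runs "kubectl get sa default" until the default service account exists after the RBAC binding is created. A hedged sketch of that retry loop follows; the binary and kubeconfig paths are taken from the log and the timeout is an assumption.

    // Sketch: poll for the default service account the way the log's retry loop does.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed budget
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
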
	I1210 01:13:54.950893  133282 kubeadm.go:394] duration metric: took 4m53.095365109s to StartCluster
	I1210 01:13:54.950920  133282 settings.go:142] acquiring lock: {Name:mk88816be2bf0f4af316b9ff0729ad510622a167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.951018  133282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 01:13:54.952642  133282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-79135/kubeconfig: {Name:mk96636c23a19a259f8287ed7b7bb5d3d5bf71ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1210 01:13:54.952903  133282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.193 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1210 01:13:54.953028  133282 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1210 01:13:54.953103  133282 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953122  133282 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953130  133282 addons.go:243] addon storage-provisioner should already be in state true
	I1210 01:13:54.953144  133282 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953165  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953165  133282 config.go:182] Loaded profile config "default-k8s-diff-port-901295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 01:13:54.953164  133282 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-901295"
	I1210 01:13:54.953175  133282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-901295"
	I1210 01:13:54.953188  133282 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.953197  133282 addons.go:243] addon metrics-server should already be in state true
	I1210 01:13:54.953236  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.953502  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953544  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953604  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953648  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.953611  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.953720  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.954470  133282 out.go:177] * Verifying Kubernetes components...
	I1210 01:13:54.955825  133282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1210 01:13:54.969471  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43591
	I1210 01:13:54.969539  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42601
	I1210 01:13:54.969905  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.969971  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.970407  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970427  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970539  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.970606  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.970834  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.970902  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.971282  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971314  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971457  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.971503  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.971615  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33149
	I1210 01:13:54.971975  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.972424  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.972451  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.972757  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.972939  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.976290  133282 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-901295"
	W1210 01:13:54.976313  133282 addons.go:243] addon default-storageclass should already be in state true
	I1210 01:13:54.976344  133282 host.go:66] Checking if "default-k8s-diff-port-901295" exists ...
	I1210 01:13:54.976701  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.976743  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.987931  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1210 01:13:54.988409  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.988950  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.988975  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.989395  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.989602  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.990179  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35279
	I1210 01:13:54.990660  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.991231  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.991256  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.991553  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.991804  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.991988  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:54.993375  133282 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1210 01:13:54.993895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:54.993895  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38115
	I1210 01:13:54.994363  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:54.994661  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1210 01:13:54.994675  133282 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1210 01:13:54.994690  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:54.994864  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:54.994882  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:54.995298  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:54.995379  133282 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1210 01:13:54.995834  133282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 01:13:54.995881  133282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 01:13:54.996682  133282 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:54.996704  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1210 01:13:54.996721  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.000015  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000319  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000343  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000361  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000321  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000540  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.000637  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.000658  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.000689  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.000819  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.000955  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.001529  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.001896  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.002167  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.013310  133282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1210 01:13:55.013700  133282 main.go:141] libmachine: () Calling .GetVersion
	I1210 01:13:55.014199  133282 main.go:141] libmachine: Using API Version  1
	I1210 01:13:55.014219  133282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 01:13:55.014556  133282 main.go:141] libmachine: () Calling .GetMachineName
	I1210 01:13:55.014997  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetState
	I1210 01:13:55.016445  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .DriverName
	I1210 01:13:55.016626  133282 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.016642  133282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1210 01:13:55.016659  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHHostname
	I1210 01:13:55.018941  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019337  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:2f:3d", ip: ""} in network mk-default-k8s-diff-port-901295: {Iface:virbr1 ExpiryTime:2024-12-10 02:08:48 +0000 UTC Type:0 Mac:52:54:00:f7:2f:3d Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:default-k8s-diff-port-901295 Clientid:01:52:54:00:f7:2f:3d}
	I1210 01:13:55.019358  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | domain default-k8s-diff-port-901295 has defined IP address 192.168.39.193 and MAC address 52:54:00:f7:2f:3d in network mk-default-k8s-diff-port-901295
	I1210 01:13:55.019578  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHPort
	I1210 01:13:55.019718  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHKeyPath
	I1210 01:13:55.019807  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .GetSSHUsername
	I1210 01:13:55.019887  133282 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/default-k8s-diff-port-901295/id_rsa Username:docker}
	I1210 01:13:55.152197  133282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1210 01:13:55.175962  133282 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185748  133282 node_ready.go:49] node "default-k8s-diff-port-901295" has status "Ready":"True"
	I1210 01:13:55.185767  133282 node_ready.go:38] duration metric: took 9.765238ms for node "default-k8s-diff-port-901295" to be "Ready" ...
	I1210 01:13:55.185776  133282 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:13:55.193102  133282 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:55.268186  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1210 01:13:55.294420  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1210 01:13:55.294451  133282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1210 01:13:55.326324  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1210 01:13:55.338979  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1210 01:13:55.339009  133282 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1210 01:13:55.393682  133282 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:55.393713  133282 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1210 01:13:55.482637  133282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1210 01:13:56.131482  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131574  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.131524  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.131650  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132095  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132112  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132129  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132133  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132138  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132140  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132148  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132149  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.132207  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.132384  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132397  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.132501  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.132565  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.132579  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.155188  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.155211  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.155515  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.155535  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.795811  133282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.313113399s)
	I1210 01:13:56.795879  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.795895  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796326  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) DBG | Closing plugin on server side
	I1210 01:13:56.796327  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796353  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796367  133282 main.go:141] libmachine: Making call to close driver server
	I1210 01:13:56.796379  133282 main.go:141] libmachine: (default-k8s-diff-port-901295) Calling .Close
	I1210 01:13:56.796612  133282 main.go:141] libmachine: Successfully made call to close driver server
	I1210 01:13:56.796628  133282 main.go:141] libmachine: Making call to close connection to plugin binary
	I1210 01:13:56.796641  133282 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-901295"
	I1210 01:13:56.798189  133282 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1210 01:13:52.256305  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:52.256333  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:52.269263  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:52.269288  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:52.310821  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:52.310855  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:52.348176  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:52.348204  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:52.399357  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:52.399392  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:52.436240  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:52.436272  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:52.962153  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:52.962192  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:53.010091  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:53.010127  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:53.082183  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:53.082218  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:53.201521  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:53.201557  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:53.243675  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:53.243711  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:55.779907  132605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:13:55.796284  132605 api_server.go:72] duration metric: took 4m14.500959712s to wait for apiserver process to appear ...
	I1210 01:13:55.796314  132605 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:13:55.796358  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:55.796431  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:55.839067  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:55.839098  132605 cri.go:89] found id: ""
	I1210 01:13:55.839107  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:55.839175  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.843310  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:55.843382  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:55.875863  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:55.875888  132605 cri.go:89] found id: ""
	I1210 01:13:55.875896  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:55.875960  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.879748  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:55.879819  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:55.911243  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:55.911269  132605 cri.go:89] found id: ""
	I1210 01:13:55.911279  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:55.911342  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.915201  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:55.915268  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:55.966280  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:55.966308  132605 cri.go:89] found id: ""
	I1210 01:13:55.966318  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:55.966384  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:55.970278  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:55.970354  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:56.004675  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:56.004706  132605 cri.go:89] found id: ""
	I1210 01:13:56.004722  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:56.004785  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.008534  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:56.008614  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:56.051252  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:56.051282  132605 cri.go:89] found id: ""
	I1210 01:13:56.051293  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:56.051356  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.055160  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:56.055243  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:56.100629  132605 cri.go:89] found id: ""
	I1210 01:13:56.100660  132605 logs.go:282] 0 containers: []
	W1210 01:13:56.100672  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:56.100681  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:56.100749  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:13:56.140250  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.140274  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.140280  132605 cri.go:89] found id: ""
	I1210 01:13:56.140290  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:13:56.140352  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.145225  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:56.150128  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:13:56.150151  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:13:56.273696  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:13:56.273730  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:56.323851  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:13:56.323884  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:56.375726  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:13:56.375763  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:56.430544  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:13:56.430587  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:13:56.866412  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:13:56.866505  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:56.901321  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:13:56.901360  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:13:56.940068  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:13:56.940107  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:13:57.010688  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:13:57.010725  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:13:57.025463  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:13:57.025514  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:57.063908  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:13:57.063939  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:57.102140  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:13:57.102182  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:57.154429  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:13:57.154467  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:13:56.799397  133282 addons.go:510] duration metric: took 1.846376359s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1210 01:13:57.200860  133282 pod_ready.go:103] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:13:59.697834  132605 api_server.go:253] Checking apiserver healthz at https://192.168.50.169:8443/healthz ...
	I1210 01:13:59.702097  132605 api_server.go:279] https://192.168.50.169:8443/healthz returned 200:
	ok
	I1210 01:13:59.703338  132605 api_server.go:141] control plane version: v1.31.2
	I1210 01:13:59.703360  132605 api_server.go:131] duration metric: took 3.907039005s to wait for apiserver health ...
	I1210 01:13:59.703368  132605 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:13:59.703389  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:13:59.703430  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:13:59.746795  132605 cri.go:89] found id: "0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:13:59.746815  132605 cri.go:89] found id: ""
	I1210 01:13:59.746822  132605 logs.go:282] 1 containers: [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c]
	I1210 01:13:59.746867  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.750673  132605 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:13:59.750736  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:13:59.783121  132605 cri.go:89] found id: "bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:13:59.783154  132605 cri.go:89] found id: ""
	I1210 01:13:59.783163  132605 logs.go:282] 1 containers: [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490]
	I1210 01:13:59.783210  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.786822  132605 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:13:59.786875  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:13:59.819075  132605 cri.go:89] found id: "7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:13:59.819096  132605 cri.go:89] found id: ""
	I1210 01:13:59.819103  132605 logs.go:282] 1 containers: [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04]
	I1210 01:13:59.819163  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.822836  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:13:59.822886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:13:59.859388  132605 cri.go:89] found id: "c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:13:59.859418  132605 cri.go:89] found id: ""
	I1210 01:13:59.859428  132605 logs.go:282] 1 containers: [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692]
	I1210 01:13:59.859482  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.863388  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:13:59.863447  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:13:59.897967  132605 cri.go:89] found id: "eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:13:59.897987  132605 cri.go:89] found id: ""
	I1210 01:13:59.897994  132605 logs.go:282] 1 containers: [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77]
	I1210 01:13:59.898037  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.902198  132605 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:13:59.902262  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:13:59.935685  132605 cri.go:89] found id: "7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:13:59.935713  132605 cri.go:89] found id: ""
	I1210 01:13:59.935724  132605 logs.go:282] 1 containers: [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f]
	I1210 01:13:59.935782  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:13:59.939600  132605 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:13:59.939653  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:13:59.975763  132605 cri.go:89] found id: ""
	I1210 01:13:59.975797  132605 logs.go:282] 0 containers: []
	W1210 01:13:59.975810  132605 logs.go:284] No container was found matching "kindnet"
	I1210 01:13:59.975819  132605 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1210 01:13:59.975886  132605 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1210 01:14:00.014470  132605 cri.go:89] found id: "8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.014500  132605 cri.go:89] found id: "abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:14:00.014506  132605 cri.go:89] found id: ""
	I1210 01:14:00.014515  132605 logs.go:282] 2 containers: [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0]
	I1210 01:14:00.014589  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.018470  132605 ssh_runner.go:195] Run: which crictl
	I1210 01:14:00.022628  132605 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:14:00.022650  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1210 01:14:00.126253  132605 logs.go:123] Gathering logs for kube-apiserver [0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c] ...
	I1210 01:14:00.126280  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0e94f76a9953499d676d481de76da88721daf7b39abcd5f3d3a54ae75e76b83c"
	I1210 01:14:00.168377  132605 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:14:00.168410  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:14:00.554305  132605 logs.go:123] Gathering logs for container status ...
	I1210 01:14:00.554349  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1210 01:14:00.597646  132605 logs.go:123] Gathering logs for kube-scheduler [c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692] ...
	I1210 01:14:00.597673  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c3cf60e1de63952dfc040515ecd36721ad3c54f31b5948ec5ee72989392692"
	I1210 01:14:00.638356  132605 logs.go:123] Gathering logs for kube-proxy [eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77] ...
	I1210 01:14:00.638385  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eef419f8befc604d92707a690f50dbe506932d9d607713cb1a6584067bb71b77"
	I1210 01:14:00.673027  132605 logs.go:123] Gathering logs for kube-controller-manager [7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f] ...
	I1210 01:14:00.673058  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7147c6004e0661355b359a850385a037d33b40aa6cc03eb80ed08125ef252a5f"
	I1210 01:14:00.736632  132605 logs.go:123] Gathering logs for storage-provisioner [8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6] ...
	I1210 01:14:00.736667  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccea68bfe8c4e01634fd2920750280ca5516fea8e2291e3a90d259370ceaab6"
	I1210 01:14:00.771609  132605 logs.go:123] Gathering logs for kubelet ...
	I1210 01:14:00.771643  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:14:00.838511  132605 logs.go:123] Gathering logs for dmesg ...
	I1210 01:14:00.838542  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:14:00.853873  132605 logs.go:123] Gathering logs for etcd [bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490] ...
	I1210 01:14:00.853901  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bad358581c44df861909a044a9734756ebb485f46ef927b85c1dbd2fc179c490"
	I1210 01:14:00.903386  132605 logs.go:123] Gathering logs for coredns [7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04] ...
	I1210 01:14:00.903417  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d559bbd79cd259f1e3d3e9ee8acae13b562aff876568a1840984c11bcc5ac04"
	I1210 01:14:00.940479  132605 logs.go:123] Gathering logs for storage-provisioner [abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0] ...
	I1210 01:14:00.940538  132605 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abb7462dd698bc9a7f0efeea78f8ec1ead40f83e931cefb9576f7b03acc5e4d0"
	I1210 01:13:59.199815  133282 pod_ready.go:93] pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:13:59.199838  133282 pod_ready.go:82] duration metric: took 4.006706604s for pod "etcd-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:13:59.199848  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:01.206809  133282 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:02.205417  133282 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:02.205439  133282 pod_ready.go:82] duration metric: took 3.005584799s for pod "kube-apiserver-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:02.205449  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:03.479747  132605 system_pods.go:59] 8 kube-system pods found
	I1210 01:14:03.479776  132605 system_pods.go:61] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.479781  132605 system_pods.go:61] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.479785  132605 system_pods.go:61] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.479789  132605 system_pods.go:61] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.479791  132605 system_pods.go:61] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.479795  132605 system_pods.go:61] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.479800  132605 system_pods.go:61] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.479804  132605 system_pods.go:61] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.479813  132605 system_pods.go:74] duration metric: took 3.776438741s to wait for pod list to return data ...
	I1210 01:14:03.479820  132605 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:03.482188  132605 default_sa.go:45] found service account: "default"
	I1210 01:14:03.482210  132605 default_sa.go:55] duration metric: took 2.383945ms for default service account to be created ...
	I1210 01:14:03.482218  132605 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:03.487172  132605 system_pods.go:86] 8 kube-system pods found
	I1210 01:14:03.487199  132605 system_pods.go:89] "coredns-7c65d6cfc9-hhsm5" [dddb227a-7c16-4acd-be5f-1ab38b78129c] Running
	I1210 01:14:03.487213  132605 system_pods.go:89] "etcd-no-preload-584179" [acba2a13-196f-4ff9-8151-6b88578d532d] Running
	I1210 01:14:03.487220  132605 system_pods.go:89] "kube-apiserver-no-preload-584179" [20a67076-4ff2-4f31-b245-bf4079cd11d1] Running
	I1210 01:14:03.487227  132605 system_pods.go:89] "kube-controller-manager-no-preload-584179" [d7a2e531-1609-4c8a-b756-d9819339ed27] Running
	I1210 01:14:03.487232  132605 system_pods.go:89] "kube-proxy-xcjs2" [ec6cf5b1-3ea9-4868-874d-61e262cca0c5] Running
	I1210 01:14:03.487239  132605 system_pods.go:89] "kube-scheduler-no-preload-584179" [998543da-3056-4960-8762-9ab3dbd1925a] Running
	I1210 01:14:03.487248  132605 system_pods.go:89] "metrics-server-6867b74b74-lwgxd" [0e7f1063-8508-4f5b-b8ff-bbd387a53919] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:03.487257  132605 system_pods.go:89] "storage-provisioner" [31180637-f48e-4dda-8ec3-56155bb300cf] Running
	I1210 01:14:03.487267  132605 system_pods.go:126] duration metric: took 5.043223ms to wait for k8s-apps to be running ...
	I1210 01:14:03.487278  132605 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:03.487331  132605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:03.503494  132605 system_svc.go:56] duration metric: took 16.208072ms WaitForService to wait for kubelet
	I1210 01:14:03.503520  132605 kubeadm.go:582] duration metric: took 4m22.208203921s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:03.503535  132605 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:03.506148  132605 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:03.506168  132605 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:03.506181  132605 node_conditions.go:105] duration metric: took 2.641093ms to run NodePressure ...
	I1210 01:14:03.506196  132605 start.go:241] waiting for startup goroutines ...
	I1210 01:14:03.506209  132605 start.go:246] waiting for cluster config update ...
	I1210 01:14:03.506228  132605 start.go:255] writing updated cluster config ...
	I1210 01:14:03.506542  132605 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:03.552082  132605 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:03.553885  132605 out.go:177] * Done! kubectl is now configured to use "no-preload-584179" cluster and "default" namespace by default
	I1210 01:14:04.212381  133282 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"False"
	I1210 01:14:05.212520  133282 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.212542  133282 pod_ready.go:82] duration metric: took 3.007086471s for pod "kube-controller-manager-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.212551  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218010  133282 pod_ready.go:93] pod "kube-proxy-mcrmk" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.218032  133282 pod_ready.go:82] duration metric: took 5.474042ms for pod "kube-proxy-mcrmk" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.218043  133282 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226656  133282 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace has status "Ready":"True"
	I1210 01:14:05.226677  133282 pod_ready.go:82] duration metric: took 8.62491ms for pod "kube-scheduler-default-k8s-diff-port-901295" in "kube-system" namespace to be "Ready" ...
	I1210 01:14:05.226685  133282 pod_ready.go:39] duration metric: took 10.040900009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1210 01:14:05.226701  133282 api_server.go:52] waiting for apiserver process to appear ...
	I1210 01:14:05.226760  133282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 01:14:05.245203  133282 api_server.go:72] duration metric: took 10.292259038s to wait for apiserver process to appear ...
	I1210 01:14:05.245225  133282 api_server.go:88] waiting for apiserver healthz status ...
	I1210 01:14:05.245246  133282 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8444/healthz ...
	I1210 01:14:05.249103  133282 api_server.go:279] https://192.168.39.193:8444/healthz returned 200:
	ok
	I1210 01:14:05.250169  133282 api_server.go:141] control plane version: v1.31.2
	I1210 01:14:05.250186  133282 api_server.go:131] duration metric: took 4.954164ms to wait for apiserver health ...
	I1210 01:14:05.250191  133282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1210 01:14:05.256313  133282 system_pods.go:59] 9 kube-system pods found
	I1210 01:14:05.256338  133282 system_pods.go:61] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.256343  133282 system_pods.go:61] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.256347  133282 system_pods.go:61] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.256351  133282 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.256355  133282 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.256358  133282 system_pods.go:61] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.256361  133282 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.256366  133282 system_pods.go:61] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.256376  133282 system_pods.go:61] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.256383  133282 system_pods.go:74] duration metric: took 6.186387ms to wait for pod list to return data ...
	I1210 01:14:05.256391  133282 default_sa.go:34] waiting for default service account to be created ...
	I1210 01:14:05.258701  133282 default_sa.go:45] found service account: "default"
	I1210 01:14:05.258720  133282 default_sa.go:55] duration metric: took 2.322746ms for default service account to be created ...
	I1210 01:14:05.258726  133282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1210 01:14:05.262756  133282 system_pods.go:86] 9 kube-system pods found
	I1210 01:14:05.262776  133282 system_pods.go:89] "coredns-7c65d6cfc9-4snjr" [ee9574b0-7c13-4fd0-b268-47bef0687b7c] Running
	I1210 01:14:05.262781  133282 system_pods.go:89] "coredns-7c65d6cfc9-wr22x" [51e6d58d-7a5a-4739-94de-c53a8c8247ca] Running
	I1210 01:14:05.262785  133282 system_pods.go:89] "etcd-default-k8s-diff-port-901295" [3e38ffc7-a02d-4449-a166-29eca58e8545] Running
	I1210 01:14:05.262791  133282 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-901295" [868a8472-8c93-4376-98c9-4d2bada6bfc0] Running
	I1210 01:14:05.262795  133282 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-901295" [730047f5-2dbe-4a89-858a-33e2cc0f52eb] Running
	I1210 01:14:05.262799  133282 system_pods.go:89] "kube-proxy-mcrmk" [ffc0f612-5484-46b4-9515-41e0a981287f] Running
	I1210 01:14:05.262802  133282 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-901295" [f5ae0336-bb5d-47ef-94f4-bfd4674adc8e] Running
	I1210 01:14:05.262808  133282 system_pods.go:89] "metrics-server-6867b74b74-rlg4g" [9aae955e-136b-4dbb-a5a5-f7490309bf4e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1210 01:14:05.262812  133282 system_pods.go:89] "storage-provisioner" [06a31677-c5d7-4380-80d3-ec80b787f570] Running
	I1210 01:14:05.262821  133282 system_pods.go:126] duration metric: took 4.090244ms to wait for k8s-apps to be running ...
	I1210 01:14:05.262827  133282 system_svc.go:44] waiting for kubelet service to be running ....
	I1210 01:14:05.262881  133282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:05.275937  133282 system_svc.go:56] duration metric: took 13.102664ms WaitForService to wait for kubelet
	I1210 01:14:05.275962  133282 kubeadm.go:582] duration metric: took 10.323025026s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1210 01:14:05.275984  133282 node_conditions.go:102] verifying NodePressure condition ...
	I1210 01:14:05.278184  133282 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1210 01:14:05.278204  133282 node_conditions.go:123] node cpu capacity is 2
	I1210 01:14:05.278217  133282 node_conditions.go:105] duration metric: took 2.226803ms to run NodePressure ...
	I1210 01:14:05.278230  133282 start.go:241] waiting for startup goroutines ...
	I1210 01:14:05.278239  133282 start.go:246] waiting for cluster config update ...
	I1210 01:14:05.278249  133282 start.go:255] writing updated cluster config ...
	I1210 01:14:05.278553  133282 ssh_runner.go:195] Run: rm -f paused
	I1210 01:14:05.326078  133282 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1210 01:14:05.327902  133282 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-901295" cluster and "default" namespace by default
	I1210 01:14:04.852302  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:04.852558  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854749  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:14:44.854980  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:14:44.854992  133241 kubeadm.go:310] 
	I1210 01:14:44.855044  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:14:44.855104  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:14:44.855115  133241 kubeadm.go:310] 
	I1210 01:14:44.855162  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:14:44.855217  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:14:44.855363  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:14:44.855380  133241 kubeadm.go:310] 
	I1210 01:14:44.855514  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:14:44.855565  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:14:44.855615  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:14:44.855625  133241 kubeadm.go:310] 
	I1210 01:14:44.855796  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:14:44.855943  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:14:44.855955  133241 kubeadm.go:310] 
	I1210 01:14:44.856139  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:14:44.856299  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:14:44.856402  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:14:44.856500  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:14:44.856525  133241 kubeadm.go:310] 
	I1210 01:14:44.856764  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:14:44.856891  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:14:44.856987  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1210 01:14:44.857195  133241 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1210 01:14:44.857249  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1210 01:14:45.319104  133241 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 01:14:45.333243  133241 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1210 01:14:45.342637  133241 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1210 01:14:45.342653  133241 kubeadm.go:157] found existing configuration files:
	
	I1210 01:14:45.342696  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1210 01:14:45.351179  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1210 01:14:45.351227  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1210 01:14:45.359836  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1210 01:14:45.368986  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1210 01:14:45.369041  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1210 01:14:45.378166  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.387734  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1210 01:14:45.387781  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1210 01:14:45.397866  133241 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1210 01:14:45.406757  133241 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1210 01:14:45.406794  133241 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1210 01:14:45.416506  133241 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1210 01:14:45.484342  133241 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1210 01:14:45.484462  133241 kubeadm.go:310] [preflight] Running pre-flight checks
	I1210 01:14:45.624435  133241 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1210 01:14:45.624583  133241 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1210 01:14:45.624732  133241 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1210 01:14:45.800410  133241 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1210 01:14:45.802184  133241 out.go:235]   - Generating certificates and keys ...
	I1210 01:14:45.802296  133241 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1210 01:14:45.802393  133241 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1210 01:14:45.802504  133241 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1210 01:14:45.802601  133241 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1210 01:14:45.802707  133241 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1210 01:14:45.802780  133241 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1210 01:14:45.802867  133241 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1210 01:14:45.803320  133241 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1210 01:14:45.804003  133241 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1210 01:14:45.804623  133241 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1210 01:14:45.804904  133241 kubeadm.go:310] [certs] Using the existing "sa" key
	I1210 01:14:45.804997  133241 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1210 01:14:45.989500  133241 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1210 01:14:46.228462  133241 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1210 01:14:46.274395  133241 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1210 01:14:46.765291  133241 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1210 01:14:46.784318  133241 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1210 01:14:46.785620  133241 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1210 01:14:46.785694  133241 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1210 01:14:46.915963  133241 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1210 01:14:46.917607  133241 out.go:235]   - Booting up control plane ...
	I1210 01:14:46.917714  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1210 01:14:46.924564  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1210 01:14:46.925924  133241 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1210 01:14:46.926912  133241 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1210 01:14:46.929973  133241 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1210 01:15:26.932207  133241 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1210 01:15:26.932539  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:26.932718  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:31.933200  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:31.933463  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:15:41.934297  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:15:41.934592  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:01.935227  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:01.935409  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934005  133241 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1210 01:16:41.934329  133241 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1210 01:16:41.934361  133241 kubeadm.go:310] 
	I1210 01:16:41.934433  133241 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1210 01:16:41.934492  133241 kubeadm.go:310] 		timed out waiting for the condition
	I1210 01:16:41.934500  133241 kubeadm.go:310] 
	I1210 01:16:41.934550  133241 kubeadm.go:310] 	This error is likely caused by:
	I1210 01:16:41.934610  133241 kubeadm.go:310] 		- The kubelet is not running
	I1210 01:16:41.934768  133241 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1210 01:16:41.934791  133241 kubeadm.go:310] 
	I1210 01:16:41.934915  133241 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1210 01:16:41.934971  133241 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1210 01:16:41.935024  133241 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1210 01:16:41.935033  133241 kubeadm.go:310] 
	I1210 01:16:41.935184  133241 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1210 01:16:41.935327  133241 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1210 01:16:41.935346  133241 kubeadm.go:310] 
	I1210 01:16:41.935485  133241 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1210 01:16:41.935600  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1210 01:16:41.935720  133241 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1210 01:16:41.935818  133241 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1210 01:16:41.935828  133241 kubeadm.go:310] 
	I1210 01:16:41.936518  133241 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1210 01:16:41.936630  133241 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1210 01:16:41.936756  133241 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1210 01:16:41.936849  133241 kubeadm.go:394] duration metric: took 7m57.690847315s to StartCluster
	I1210 01:16:41.936924  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1210 01:16:41.936994  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1210 01:16:41.979911  133241 cri.go:89] found id: ""
	I1210 01:16:41.979944  133241 logs.go:282] 0 containers: []
	W1210 01:16:41.979955  133241 logs.go:284] No container was found matching "kube-apiserver"
	I1210 01:16:41.979964  133241 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1210 01:16:41.980037  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1210 01:16:42.018336  133241 cri.go:89] found id: ""
	I1210 01:16:42.018366  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.018378  133241 logs.go:284] No container was found matching "etcd"
	I1210 01:16:42.018385  133241 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1210 01:16:42.018461  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1210 01:16:42.050036  133241 cri.go:89] found id: ""
	I1210 01:16:42.050065  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.050074  133241 logs.go:284] No container was found matching "coredns"
	I1210 01:16:42.050080  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1210 01:16:42.050139  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1210 01:16:42.083023  133241 cri.go:89] found id: ""
	I1210 01:16:42.083051  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.083063  133241 logs.go:284] No container was found matching "kube-scheduler"
	I1210 01:16:42.083072  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1210 01:16:42.083131  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1210 01:16:42.117900  133241 cri.go:89] found id: ""
	I1210 01:16:42.117921  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.117930  133241 logs.go:284] No container was found matching "kube-proxy"
	I1210 01:16:42.117936  133241 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1210 01:16:42.117982  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1210 01:16:42.150009  133241 cri.go:89] found id: ""
	I1210 01:16:42.150041  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.150054  133241 logs.go:284] No container was found matching "kube-controller-manager"
	I1210 01:16:42.150063  133241 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1210 01:16:42.150116  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1210 01:16:42.182606  133241 cri.go:89] found id: ""
	I1210 01:16:42.182632  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.182643  133241 logs.go:284] No container was found matching "kindnet"
	I1210 01:16:42.182650  133241 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1210 01:16:42.182712  133241 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1210 01:16:42.223456  133241 cri.go:89] found id: ""
	I1210 01:16:42.223486  133241 logs.go:282] 0 containers: []
	W1210 01:16:42.223496  133241 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1210 01:16:42.223507  133241 logs.go:123] Gathering logs for kubelet ...
	I1210 01:16:42.223522  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1210 01:16:42.287081  133241 logs.go:123] Gathering logs for dmesg ...
	I1210 01:16:42.287118  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1210 01:16:42.308277  133241 logs.go:123] Gathering logs for describe nodes ...
	I1210 01:16:42.308315  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1210 01:16:42.401928  133241 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1210 01:16:42.401960  133241 logs.go:123] Gathering logs for CRI-O ...
	I1210 01:16:42.401977  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1210 01:16:42.515786  133241 logs.go:123] Gathering logs for container status ...
	I1210 01:16:42.515829  133241 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1210 01:16:42.551865  133241 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1210 01:16:42.551924  133241 out.go:270] * 
	W1210 01:16:42.552001  133241 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.552019  133241 out.go:270] * 
	W1210 01:16:42.552906  133241 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1210 01:16:42.556458  133241 out.go:201] 
	W1210 01:16:42.557556  133241 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1210 01:16:42.557619  133241 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1210 01:16:42.557649  133241 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1210 01:16:42.559020  133241 out.go:201] 
	
	
	==> CRI-O <==
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.423359575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794107423332014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac5c71cb-f58b-41d2-a4ba-5d85bb8b7542 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.423863115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a904a78-4b4f-4b43-b41d-42a799873312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.423933337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a904a78-4b4f-4b43-b41d-42a799873312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.423967092Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8a904a78-4b4f-4b43-b41d-42a799873312 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.462035067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40e00bcc-ca57-4164-821a-94e54035871b name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.462147114Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40e00bcc-ca57-4164-821a-94e54035871b name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.463340774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74a7a88b-42d9-49b6-a5a3-d6c148c37eb2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.463959384Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794107463928557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74a7a88b-42d9-49b6-a5a3-d6c148c37eb2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.464602675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=044a1040-9cc0-40e6-a87b-6e73f8467b40 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.464671725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=044a1040-9cc0-40e6-a87b-6e73f8467b40 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.464730211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=044a1040-9cc0-40e6-a87b-6e73f8467b40 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.498976110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66e8f498-d3a5-4df9-8098-a4f89c826eca name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.499077324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66e8f498-d3a5-4df9-8098-a4f89c826eca name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.500286045Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00d8f5f2-ce10-4dcc-b929-4b0e28e42d10 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.500883245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794107500841777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00d8f5f2-ce10-4dcc-b929-4b0e28e42d10 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.501675484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bbb4e2a-b5ea-4b8e-bda8-3f1415bcebff name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.501755616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bbb4e2a-b5ea-4b8e-bda8-3f1415bcebff name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.501805790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8bbb4e2a-b5ea-4b8e-bda8-3f1415bcebff name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.538694539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deb22b5d-89b7-4e75-aaae-cba4a51d2810 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.538789250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deb22b5d-89b7-4e75-aaae-cba4a51d2810 name=/runtime.v1.RuntimeService/Version
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.539935775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7e95d34-b259-4281-a494-1c8b5574e362 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.540390831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733794107540368418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7e95d34-b259-4281-a494-1c8b5574e362 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.541064859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f553c5f-cb64-42d0-aa88-3252117ec347 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.541117578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f553c5f-cb64-42d0-aa88-3252117ec347 name=/runtime.v1.RuntimeService/ListContainers
	Dec 10 01:28:27 old-k8s-version-094470 crio[632]: time="2024-12-10 01:28:27.541154416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3f553c5f-cb64-42d0-aa88-3252117ec347 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec10 01:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058441] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.955123] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919200] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.577947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.210341] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.056035] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052496] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.200301] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.121921] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.235690] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +5.849695] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.064134] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.756376] systemd-fstab-generator[1004]: Ignoring "noauto" option for root device
	[ +13.680417] kauditd_printk_skb: 46 callbacks suppressed
	[Dec10 01:12] systemd-fstab-generator[5121]: Ignoring "noauto" option for root device
	[Dec10 01:14] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[  +0.065463] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:28:27 up 20 min,  0 users,  load average: 0.04, 0.06, 0.07
	Linux old-k8s-version-094470 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c34780, 0xc000c625a0)
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: goroutine 113 [runnable]:
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0000cddc0)
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: goroutine 162 [runnable]:
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000cddc0)
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 10 01:28:23 old-k8s-version-094470 kubelet[6929]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 10 01:28:23 old-k8s-version-094470 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 10 01:28:23 old-k8s-version-094470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 10 01:28:24 old-k8s-version-094470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 142.
	Dec 10 01:28:24 old-k8s-version-094470 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 10 01:28:24 old-k8s-version-094470 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 10 01:28:24 old-k8s-version-094470 kubelet[6938]: I1210 01:28:24.080993    6938 server.go:416] Version: v1.20.0
	Dec 10 01:28:24 old-k8s-version-094470 kubelet[6938]: I1210 01:28:24.081354    6938 server.go:837] Client rotation is on, will bootstrap in background
	Dec 10 01:28:24 old-k8s-version-094470 kubelet[6938]: I1210 01:28:24.084383    6938 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 10 01:28:24 old-k8s-version-094470 kubelet[6938]: W1210 01:28:24.085701    6938 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 10 01:28:24 old-k8s-version-094470 kubelet[6938]: I1210 01:28:24.086611    6938 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 2 (251.770567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-094470" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.37s)
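
The captured logs above show why this check cannot pass: the v1.20.0 kubelet on this node is crash-looping ("Cannot detect current cgroup on cgroup v2", restart counter at 142, kubelet.service exiting with status 255), so kubeadm init times out waiting for the control plane, the API server stays Stopped, and kubectl calls are refused on localhost:8443. A minimal triage sketch follows, based only on the suggestion minikube itself prints above; the profile name is taken from this run, and the exact flags are assumptions to verify against your own environment rather than a confirmed fix:

	# inspect the crash-looping kubelet on the node for this profile (profile name from this run)
	out/minikube-linux-amd64 ssh -p old-k8s-version-094470 -- sudo journalctl -u kubelet -n 100 --no-pager
	# retry the start with the cgroup driver override suggested in the log above (flags assumed from this job's configuration)
	out/minikube-linux-amd64 start -p old-k8s-version-094470 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

If the override lets the control plane come up, the failure points at a cgroup v2 / cgroup driver mismatch in the old v1.20.0 kubelet rather than at the addon under test.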

                                                
                                    

Test pass (243/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.2/json-events 4.28
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.06
18 TestDownloadOnly/v1.31.2/DeleteAll 0.13
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 82.75
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 126.1
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.5
35 TestAddons/parallel/Registry 16.18
37 TestAddons/parallel/InspektorGadget 11.81
40 TestAddons/parallel/CSI 65.9
41 TestAddons/parallel/Headlamp 17.72
42 TestAddons/parallel/CloudSpanner 6.72
43 TestAddons/parallel/LocalPath 11.35
44 TestAddons/parallel/NvidiaDevicePlugin 5.68
45 TestAddons/parallel/Yakd 11.7
48 TestCertOptions 49.02
49 TestCertExpiration 294.79
51 TestForceSystemdFlag 69.14
52 TestForceSystemdEnv 53.81
54 TestKVMDriverInstallOrUpdate 3.78
58 TestErrorSpam/setup 41.43
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.47
62 TestErrorSpam/unpause 1.63
63 TestErrorSpam/stop 4.74
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 397.95
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
75 TestFunctional/serial/CacheCmd/cache/add_local 1.87
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 56.04
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.39
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.42
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 13.59
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 7.44
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 42.27
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.38
103 TestFunctional/parallel/MySQL 23.66
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.38
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
113 TestFunctional/parallel/License 0.15
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.17
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
116 TestFunctional/parallel/ProfileCmd/profile_list 0.39
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
118 TestFunctional/parallel/MountCmd/any-port 8.54
119 TestFunctional/parallel/MountCmd/specific-port 1.84
120 TestFunctional/parallel/ServiceCmd/List 0.51
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
122 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
124 TestFunctional/parallel/ServiceCmd/Format 0.56
125 TestFunctional/parallel/ServiceCmd/URL 0.36
126 TestFunctional/parallel/Version/short 0.05
127 TestFunctional/parallel/Version/components 0.62
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.5
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.55
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.51
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.61
132 TestFunctional/parallel/ImageCommands/ImageBuild 10.21
133 TestFunctional/parallel/ImageCommands/Setup 1.65
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.62
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.02
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 199.55
160 TestMultiControlPlane/serial/DeployApp 5.83
161 TestMultiControlPlane/serial/PingHostFromPods 1.12
162 TestMultiControlPlane/serial/AddWorkerNode 56.48
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
165 TestMultiControlPlane/serial/CopyFile 12.42
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/RestartCluster 314.45
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 72.18
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
181 TestJSONOutput/start/Command 53.37
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.64
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.58
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.36
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 84.18
213 TestMountStart/serial/StartWithMountFirst 27.03
214 TestMountStart/serial/VerifyMountFirst 0.36
215 TestMountStart/serial/StartWithMountSecond 26.82
216 TestMountStart/serial/VerifyMountSecond 0.37
217 TestMountStart/serial/DeleteFirst 0.69
218 TestMountStart/serial/VerifyMountPostDelete 0.36
219 TestMountStart/serial/Stop 1.27
220 TestMountStart/serial/RestartStopped 22.75
221 TestMountStart/serial/VerifyMountPostStop 0.36
224 TestMultiNode/serial/FreshStart2Nodes 112.56
225 TestMultiNode/serial/DeployApp2Nodes 4.57
226 TestMultiNode/serial/PingHostFrom2Pods 0.75
227 TestMultiNode/serial/AddNode 51.13
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.55
230 TestMultiNode/serial/CopyFile 7.02
231 TestMultiNode/serial/StopNode 2.2
232 TestMultiNode/serial/StartAfterStop 38.41
234 TestMultiNode/serial/DeleteNode 2.25
236 TestMultiNode/serial/RestartMultiNode 178.15
237 TestMultiNode/serial/ValidateNameConflict 41.69
244 TestScheduledStopUnix 110.15
248 TestRunningBinaryUpgrade 203.11
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
261 TestNoKubernetes/serial/StartWithK8s 90.25
262 TestNoKubernetes/serial/StartWithStopK8s 37.49
263 TestNoKubernetes/serial/Start 28.99
271 TestNetworkPlugins/group/false 3.09
275 TestStoppedBinaryUpgrade/Setup 0.54
276 TestStoppedBinaryUpgrade/Upgrade 97.14
277 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
278 TestNoKubernetes/serial/ProfileList 28.2
279 TestNoKubernetes/serial/Stop 2.82
280 TestNoKubernetes/serial/StartNoArgs 20.95
282 TestPause/serial/Start 109.4
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
289 TestStartStop/group/no-preload/serial/FirstStart 133.24
291 TestStartStop/group/embed-certs/serial/FirstStart 115.64
292 TestStartStop/group/no-preload/serial/DeployApp 9.29
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.65
295 TestStartStop/group/embed-certs/serial/DeployApp 9.3
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.99
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.88
307 TestStartStop/group/no-preload/serial/SecondStart 641.63
308 TestStartStop/group/embed-certs/serial/SecondStart 601.82
310 TestStartStop/group/old-k8s-version/serial/Stop 5.29
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 563.03
323 TestStartStop/group/newest-cni/serial/FirstStart 43.47
324 TestNetworkPlugins/group/auto/Start 107.18
325 TestNetworkPlugins/group/kindnet/Start 88.9
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
328 TestStartStop/group/newest-cni/serial/Stop 11.81
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
330 TestStartStop/group/newest-cni/serial/SecondStart 53.38
331 TestNetworkPlugins/group/auto/KubeletFlags 0.23
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
335 TestNetworkPlugins/group/auto/NetCatPod 10.24
336 TestStartStop/group/newest-cni/serial/Pause 2.93
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/calico/Start 79.74
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.21
341 TestNetworkPlugins/group/auto/DNS 0.18
342 TestNetworkPlugins/group/auto/Localhost 0.12
343 TestNetworkPlugins/group/auto/HairPin 0.17
344 TestNetworkPlugins/group/kindnet/DNS 0.16
345 TestNetworkPlugins/group/kindnet/Localhost 0.12
346 TestNetworkPlugins/group/kindnet/HairPin 0.12
347 TestNetworkPlugins/group/custom-flannel/Start 80.9
348 TestNetworkPlugins/group/enable-default-cni/Start 86.06
349 TestNetworkPlugins/group/flannel/Start 122.15
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/KubeletFlags 0.3
352 TestNetworkPlugins/group/calico/NetCatPod 13.3
353 TestNetworkPlugins/group/calico/DNS 0.17
354 TestNetworkPlugins/group/calico/Localhost 0.13
355 TestNetworkPlugins/group/calico/HairPin 0.17
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.31
360 TestNetworkPlugins/group/custom-flannel/DNS 0.37
361 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
362 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
363 TestNetworkPlugins/group/bridge/Start 90.74
364 TestNetworkPlugins/group/enable-default-cni/DNS 16.17
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
369 TestNetworkPlugins/group/flannel/NetCatPod 10.24
370 TestNetworkPlugins/group/flannel/DNS 0.17
371 TestNetworkPlugins/group/flannel/Localhost 0.14
372 TestNetworkPlugins/group/flannel/HairPin 0.13
373 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
374 TestNetworkPlugins/group/bridge/NetCatPod 10.23
375 TestNetworkPlugins/group/bridge/DNS 0.14
376 TestNetworkPlugins/group/bridge/Localhost 0.11
377 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-279229 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-279229 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.995639006s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.00s)
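For reference, the same download-only start can be replayed by hand with the flags shown above; this is only a sketch of the invocation, assuming a locally built out/minikube-linux-amd64 binary and a host with the kvm2 driver available.

  # Replays the download-only run above: -o=json streams progress as JSON events,
  # --download-only fetches the ISO, preload tarball and images without creating a VM.
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-279229 \
    --force --alsologtostderr \
    --kubernetes-version=v1.20.0 \
    --container-runtime=crio --driver=kvm2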

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 23:43:34.956383   86296 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1209 23:43:34.956506   86296 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
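The assertion above only checks that the preload tarball landed in the local cache. A quick manual equivalent, assuming the default $HOME/.minikube layout rather than the CI-specific MINIKUBE_HOME used in this run:

  # The file name encodes the preload schema version (v18), the Kubernetes version
  # and the container runtime, matching the path logged above.
  ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"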

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-279229
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-279229: exit status 85 (59.600935ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-279229 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |          |
	|         | -p download-only-279229        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:27
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:27.001901   86307 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:27.002027   86307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:27.002039   86307 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:27.002046   86307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:27.002234   86307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	W1209 23:43:27.002376   86307 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20062-79135/.minikube/config/config.json: open /home/jenkins/minikube-integration/20062-79135/.minikube/config/config.json: no such file or directory
	I1209 23:43:27.003015   86307 out.go:352] Setting JSON to true
	I1209 23:43:27.003883   86307 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5158,"bootTime":1733782649,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:27.003991   86307 start.go:139] virtualization: kvm guest
	I1209 23:43:27.006269   86307 out.go:97] [download-only-279229] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1209 23:43:27.006387   86307 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 23:43:27.006420   86307 notify.go:220] Checking for updates...
	I1209 23:43:27.007881   86307 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:27.009194   86307 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:27.010418   86307 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:43:27.011754   86307 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:27.012887   86307 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1209 23:43:27.015039   86307 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 23:43:27.015253   86307 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 23:43:27.048517   86307 out.go:97] Using the kvm2 driver based on user configuration
	I1209 23:43:27.048558   86307 start.go:297] selected driver: kvm2
	I1209 23:43:27.048569   86307 start.go:901] validating driver "kvm2" against <nil>
	I1209 23:43:27.048933   86307 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:27.049016   86307 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20062-79135/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1209 23:43:27.063627   86307 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1209 23:43:27.063685   86307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 23:43:27.064195   86307 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1209 23:43:27.064350   86307 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 23:43:27.064411   86307 cni.go:84] Creating CNI manager for ""
	I1209 23:43:27.064474   86307 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1209 23:43:27.064488   86307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 23:43:27.064542   86307 start.go:340] cluster config:
	{Name:download-only-279229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-279229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 23:43:27.064751   86307 iso.go:125] acquiring lock: {Name:mkf3c3ca721ad8ae5cace43ebd4c4f8776544264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 23:43:27.066486   86307 out.go:97] Downloading VM boot image ...
	I1209 23:43:27.066538   86307 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1209 23:43:30.381224   86307 out.go:97] Starting "download-only-279229" primary control-plane node in "download-only-279229" cluster
	I1209 23:43:30.381247   86307 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:43:30.408236   86307 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1209 23:43:30.408256   86307 cache.go:56] Caching tarball of preloaded images
	I1209 23:43:30.408398   86307 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1209 23:43:30.409973   86307 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 23:43:30.409989   86307 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1209 23:43:30.436730   86307 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-279229 host does not exist
	  To start a cluster, run: "minikube start -p download-only-279229"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
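The non-zero exit is the expected result here: a download-only profile never gets a running host, so "minikube logs" has nothing to collect. A rough manual check of the same behaviour, assuming the profile from the run above still exists:

  # Expect a failure exit status (85 in the run above) because the profile
  # was created with --download-only and the VM was never started.
  out/minikube-linux-amd64 logs -p download-only-279229
  echo "exit status: $?"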

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-279229
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (4.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-539681 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-539681 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.276307515s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.28s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 23:43:39.550317   86296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1209 23:43:39.550364   86296 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-79135/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-539681
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-539681: exit status 85 (60.685204ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-279229 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-279229        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| delete  | -p download-only-279229        | download-only-279229 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC | 09 Dec 24 23:43 UTC |
	| start   | -o=json --download-only        | download-only-539681 | jenkins | v1.34.0 | 09 Dec 24 23:43 UTC |                     |
	|         | -p download-only-539681        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 23:43:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 23:43:35.314954   86517 out.go:345] Setting OutFile to fd 1 ...
	I1209 23:43:35.315086   86517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:35.315097   86517 out.go:358] Setting ErrFile to fd 2...
	I1209 23:43:35.315103   86517 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 23:43:35.315289   86517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1209 23:43:35.315820   86517 out.go:352] Setting JSON to true
	I1209 23:43:35.316629   86517 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5166,"bootTime":1733782649,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1209 23:43:35.316681   86517 start.go:139] virtualization: kvm guest
	I1209 23:43:35.318782   86517 out.go:97] [download-only-539681] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1209 23:43:35.318917   86517 notify.go:220] Checking for updates...
	I1209 23:43:35.320324   86517 out.go:169] MINIKUBE_LOCATION=20062
	I1209 23:43:35.321600   86517 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 23:43:35.322777   86517 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1209 23:43:35.323923   86517 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1209 23:43:35.325058   86517 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-539681 host does not exist
	  To start a cluster, run: "minikube start -p download-only-539681"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-539681
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1209 23:43:40.114968   86296 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-419481 --alsologtostderr --binary-mirror http://127.0.0.1:41707 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-419481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-419481
--- PASS: TestBinaryMirror (0.59s)
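The --binary-mirror flag points minikube at an alternative URL for the Kubernetes binary downloads instead of dl.k8s.io; the test serves one on an ephemeral local port. A rough sketch of doing the same by hand, assuming the mirror only needs to reproduce the release/<version>/bin/linux/amd64/<binary> path layout referenced in the log line above (the port and directory name are placeholders, and additional binaries or checksum files may be needed in practice):

  # Build a tiny mirror with the dl.k8s.io path layout and serve it locally.
  mkdir -p mirror/release/v1.31.2/bin/linux/amd64
  curl -Lo mirror/release/v1.31.2/bin/linux/amd64/kubectl \
    https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl
  (cd mirror && python3 -m http.server 41707) &

  # Point a download-only start at the mirror.
  out/minikube-linux-amd64 start --download-only -p binary-mirror-419481 \
    --alsologtostderr --binary-mirror http://127.0.0.1:41707 \
    --driver=kvm2 --container-runtime=crio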

                                                
                                    
x
+
TestOffline (82.75s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-942783 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-942783 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.842696356s)
helpers_test.go:175: Cleaning up "offline-crio-942783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-942783
--- PASS: TestOffline (82.75s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-327804
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-327804: exit status 85 (52.881717ms)

                                                
                                                
-- stdout --
	* Profile "addons-327804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-327804"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-327804
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-327804: exit status 85 (52.168501ms)

                                                
                                                
-- stdout --
	* Profile "addons-327804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-327804"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
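Both pre-setup checks assert the same thing: addon commands against a profile that does not exist fail fast with a hint instead of creating anything. A minimal manual check (the profile name here is deliberately a non-existent placeholder):

  # Expect a prompt failure (exit status 85 in the runs above) and the
  # "Profile ... not found" hint on stdout.
  out/minikube-linux-amd64 addons enable dashboard -p no-such-profile;  echo "enable exit: $?"
  out/minikube-linux-amd64 addons disable dashboard -p no-such-profile; echo "disable exit: $?"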

                                                
                                    
x
+
TestAddons/Setup (126.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-327804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-327804 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.102176699s)
--- PASS: TestAddons/Setup (126.10s)
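The setup start enables every addon exercised by the parallel subtests in a single invocation. A trimmed sketch of the same shape, assuming only a few addons are wanted; the full flag list is the one shown in the log above:

  out/minikube-linux-amd64 start -p addons-327804 --wait=true --memory=4000 \
    --driver=kvm2 --container-runtime=crio \
    --addons=registry --addons=metrics-server --addons=ingress --addons=csi-hostpath-driver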

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-327804 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-327804 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
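The two kubectl calls encode the expectation that the gcp-auth addon makes its credentials Secret available in namespaces created after the addon is enabled; replayed by hand:

  # Create a fresh namespace, then confirm the gcp-auth Secret shows up in it.
  kubectl --context addons-327804 create ns new-namespace
  kubectl --context addons-327804 get secret gcp-auth -n new-namespace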

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.5s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-327804 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-327804 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9c2cba33-a47e-457a-a491-52d554257a4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9c2cba33-a47e-457a-a491-52d554257a4e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004272026s
addons_test.go:633: (dbg) Run:  kubectl --context addons-327804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-327804 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-327804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.50s)
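The check here is that, with gcp-auth enabled, an ordinary pod gets the fake Google credential environment injected without any changes to its manifest. The same probes, run by hand against the busybox pod created from the test's manifest:

  # Both variables are expected to be set by the gcp-auth addon.
  kubectl --context addons-327804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-327804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"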

                                                
                                    
x
+
TestAddons/parallel/Registry (16.18s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.840666ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-sr6kt" [38920e52-e20a-4542-af24-1efcde928cf7] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003961313s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rft2s" [6ff74e8e-3b66-4249-984f-1c881b667876] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00360084s
addons_test.go:331: (dbg) Run:  kubectl --context addons-327804 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-327804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-327804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.435292599s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 ip
2024/12/09 23:46:42 [DEBUG] GET http://192.168.39.22:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.18s)
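The registry addon is probed from two directions: in-cluster via the Service DNS name and from the host via the node IP on port 5000. Condensed from the commands above (the curl probe is an approximation of the test's HTTP GET):

  # In-cluster: the registry Service should answer on its cluster DNS name.
  kubectl --context addons-327804 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # Host side: registry-proxy exposes the registry on the node IP.
  curl -sI "http://$(out/minikube-linux-amd64 -p addons-327804 ip):5000"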

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4mxq6" [d6f48aa5-3dd3-45d9-a603-ad5a4f3fa7fa] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004498228s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 addons disable inspektor-gadget --alsologtostderr -v=1: (5.802454217s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
x
+
TestAddons/parallel/CSI (65.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1209 23:46:26.946017   86296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.510677ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [aea526a0-2310-4bd7-a3fa-ab293e505b80] Pending
helpers_test.go:344: "task-pv-pod" [aea526a0-2310-4bd7-a3fa-ab293e505b80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [aea526a0-2310-4bd7-a3fa-ab293e505b80] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.0037691s
addons_test.go:511: (dbg) Run:  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-327804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-327804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-327804 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-327804 delete pod task-pv-pod: (1.196149249s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-327804 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d219c1ab-52ca-4d79-8e0c-1e31958bfda8] Pending
helpers_test.go:344: "task-pv-pod-restore" [d219c1ab-52ca-4d79-8e0c-1e31958bfda8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d219c1ab-52ca-4d79-8e0c-1e31958bfda8] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004224921s
addons_test.go:553: (dbg) Run:  kubectl --context addons-327804 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-327804 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-327804 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.712002923s)
--- PASS: TestAddons/parallel/CSI (65.90s)
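The CSI flow above is: claim a volume, mount it from a pod, snapshot it, delete the original claim, then restore a new claim and pod from the snapshot. A condensed replay of the same sequence; the manifests are the ones referenced by the test and ship with the minikube source tree, and the waits between steps are omitted here:

  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-327804 delete pod task-pv-pod
  kubectl --context addons-327804 delete pvc hpvc
  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-327804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml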

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-327804 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-ssqk5" [81513400-968c-4d44-a354-cb313a8e5f51] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-ssqk5" [81513400-968c-4d44-a354-cb313a8e5f51] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-ssqk5" [81513400-968c-4d44-a354-cb313a8e5f51] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004476817s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 addons disable headlamp --alsologtostderr -v=1: (5.89149472s)
--- PASS: TestAddons/parallel/Headlamp (17.72s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-qk52r" [951c8285-0c7d-44dc-ac5f-be8bf548393e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004298999s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.72s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.35s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-327804 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-327804 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5ac7fe58-d39c-47a5-bc90-eec477db1484] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5ac7fe58-d39c-47a5-bc90-eec477db1484] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5ac7fe58-d39c-47a5-bc90-eec477db1484] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.030642829s
addons_test.go:906: (dbg) Run:  kubectl --context addons-327804 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 ssh "cat /opt/local-path-provisioner/pvc-d933e89a-c1b5-434b-bf3c-35e985eb04c2_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-327804 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-327804 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.35s)
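With the storage-provisioner-rancher addon, the bound volume is backed by a host directory on the node, which is why the test can read the written file over minikube ssh. A manual equivalent, with the provisioner directory name treated as a placeholder (the real PV name comes from the PVC's JSON, as in the test):

  # Find the bound PV, then read the file straight from the node's host path.
  kubectl --context addons-327804 get pvc test-pvc -o=json
  out/minikube-linux-amd64 -p addons-327804 ssh \
    "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"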

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.68s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4fmgx" [a89eaf64-40a3-4ab2-a394-a852c6a26f53] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004550983s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.68s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xblfv" [ebf0240d-4fdb-49ed-be00-7bbe2986daec] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003085029s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-327804 addons disable yakd --alsologtostderr -v=1: (5.697534614s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

                                                
                                    
x
+
TestCertOptions (49.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-086522 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-086522 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.839399134s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-086522 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-086522 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-086522 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-086522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-086522
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-086522: (1.17954695s)
--- PASS: TestCertOptions (49.02s)
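The interesting part of this test is not that the cluster starts but that the extra SANs, IPs and the non-default API server port requested on the command line end up in the generated certificate and kubeconfig. The same verification, condensed from the commands above:

  # The requested IP/hostname SANs should appear in the apiserver certificate...
  out/minikube-linux-amd64 -p cert-options-086522 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -E "192.168.15.15|www.google.com"
  # ...and the kubeconfig should point at port 8555.
  kubectl --context cert-options-086522 config view | grep 8555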

                                                
                                    
x
+
TestCertExpiration (294.79s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-290541 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-290541 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.461492853s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-290541 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-290541 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.303737319s)
helpers_test.go:175: Cleaning up "cert-expiration-290541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-290541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-290541: (1.027159044s)
--- PASS: TestCertExpiration (294.79s)
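The first start issues certificates with a 3 minute lifetime; the second start, after they have expired, reissues them with 8760h. A rough manual spot-check of the resulting lifetime, assuming the same certificate path that TestCertOptions inspects:

  # Print the notAfter date of the apiserver certificate after the second start.
  out/minikube-linux-amd64 -p cert-expiration-290541 ssh \
    "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"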

                                                
                                    
x
+
TestForceSystemdFlag (69.14s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-887293 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-887293 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m7.946114338s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-887293 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-887293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-887293
--- PASS: TestForceSystemdFlag (69.14s)
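The assertion behind the ssh command is about CRI-O's generated drop-in config: with --force-systemd the cgroup manager written there should be systemd (the expected value is inferred from the flag, not shown in the log). A narrower manual check:

  # Show only the cgroup-related lines of CRI-O's drop-in config.
  out/minikube-linux-amd64 -p force-systemd-flag-887293 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup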

                                                
                                    
x
+
TestForceSystemdEnv (53.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-933327 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-933327 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (52.760832386s)
helpers_test.go:175: Cleaning up "force-systemd-env-933327" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-933327
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-933327: (1.04469977s)
--- PASS: TestForceSystemdEnv (53.81s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.78s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1210 00:54:45.709488   86296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:54:45.709621   86296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1210 00:54:45.737227   86296 install.go:62] docker-machine-driver-kvm2: exit status 1
W1210 00:54:45.737566   86296 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:54:45.737656   86296 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3440436290/001/docker-machine-driver-kvm2
I1210 00:54:45.948687   86296 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3440436290/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000783570 gz:0xc000783578 tar:0xc000783520 tar.bz2:0xc000783530 tar.gz:0xc000783540 tar.xz:0xc000783550 tar.zst:0xc000783560 tbz2:0xc000783530 tgz:0xc000783540 txz:0xc000783550 tzst:0xc000783560 xz:0xc000783580 zip:0xc000783590 zst:0xc000783588] Getters:map[file:0xc000714040 http:0xc0007cea00 https:0xc0007cea50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:54:45.948748   86296 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3440436290/001/docker-machine-driver-kvm2
I1210 00:54:47.795880   86296 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:54:47.795963   86296 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1210 00:54:47.823515   86296 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1210 00:54:47.823544   86296 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1210 00:54:47.823608   86296 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1210 00:54:47.823632   86296 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3440436290/002/docker-machine-driver-kvm2
I1210 00:54:47.876330   86296 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3440436290/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020 0x5316020] Decompressors:map[bz2:0xc000783570 gz:0xc000783578 tar:0xc000783520 tar.bz2:0xc000783530 tar.gz:0xc000783540 tar.xz:0xc000783550 tar.zst:0xc000783560 tbz2:0xc000783530 tgz:0xc000783540 txz:0xc000783550 tzst:0xc000783560 xz:0xc000783580 zip:0xc000783590 zst:0xc000783588] Getters:map[file:0xc0006ebde0 http:0xc0007cab40 https:0xc0007cab90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1210 00:54:47.876387   86296 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3440436290/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.78s)
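The two download attempts above show the fallback path: the arch-suffixed release asset 404s, so the common asset name is fetched instead. A hedged sketch of performing the same install by hand (the release tag is taken from this run's URLs; the destination directory is an assumption for illustration):

  curl -LO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2
  chmod +x docker-machine-driver-kvm2
  sudo mv docker-machine-driver-kvm2 /usr/local/bin/

The test then validates the binary found on PATH and repeats the flow against a deliberately older driver (1.1.1) to confirm the upgrade path.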

                                                
                                    
x
+
TestErrorSpam/setup (41.43s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-244574 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244574 --driver=kvm2  --container-runtime=crio
E1209 23:55:47.491232   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.497716   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.509104   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.530652   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.572093   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.653553   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:47.815111   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:55:48.136808   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-244574 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-244574 --driver=kvm2  --container-runtime=crio: (41.425328135s)
--- PASS: TestErrorSpam/setup (41.43s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 start --dry-run
E1209 23:55:48.778064   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 pause
E1209 23:55:50.060018   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 unpause
E1209 23:55:52.622220   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestErrorSpam/unpause (1.63s)

                                                
                                    
x
+
TestErrorSpam/stop (4.74s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop: (1.583707038s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop: (1.833851503s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-244574 --log_dir /tmp/nospam-244574 stop: (1.323148622s)
--- PASS: TestErrorSpam/stop (4.74s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20062-79135/.minikube/files/etc/test/nested/copy/86296/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (82.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1209 23:56:07.986199   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:56:28.468313   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:57:09.430921   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-551825 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.72506227s)
--- PASS: TestFunctional/serial/StartWithProxy (82.73s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (397.95s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1209 23:57:20.607463   86296 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --alsologtostderr -v=8
E1209 23:58:31.355613   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:00:47.491099   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:01:15.197972   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-551825 --alsologtostderr -v=8: (6m37.950809629s)
functional_test.go:663: soft start took 6m37.951634707s for "functional-551825" cluster.
I1210 00:03:58.558765   86296 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (397.95s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-551825 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:3.1: (1.023971914s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:3.3: (1.108539181s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 cache add registry.k8s.io/pause:latest: (1.063156475s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-551825 /tmp/TestFunctionalserialCacheCmdcacheadd_local2534444319/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache add minikube-local-cache-test:functional-551825
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 cache add minikube-local-cache-test:functional-551825: (1.56588031s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache delete minikube-local-cache-test:functional-551825
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-551825
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.87s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.060467ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
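Condensed, the reload check is: remove a cached image from the node's CRI-O store, confirm it is gone, repopulate from the host-side cache, confirm it is back. All commands below are taken from this run:

  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image absent
  out/minikube-linux-amd64 -p functional-551825 cache reload
  out/minikube-linux-amd64 -p functional-551825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again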

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 kubectl -- --context functional-551825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-551825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (56.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-551825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.041611648s)
functional_test.go:761: restart took 56.041772425s for "functional-551825" cluster.
I1210 00:05:01.980288   86296 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (56.04s)
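--extra-config takes component.key=value pairs that are passed through to the named Kubernetes component; this run set apiserver.enable-admission-plugins. A further illustrative example of the same flag format (assumed, not taken from this run):

  out/minikube-linux-amd64 start -p functional-551825 --extra-config=kubelet.max-pods=110 --wait=all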

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-551825 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 logs: (1.390668623s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 logs --file /tmp/TestFunctionalserialLogsFileCmd3360785742/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 logs --file /tmp/TestFunctionalserialLogsFileCmd3360785742/001/logs.txt: (1.425796611s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.42s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-551825 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-551825
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-551825: exit status 115 (273.142561ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.69:31583 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-551825 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)
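The SVC_UNREACHABLE exit comes from a Service that exposes a NodePort but has no running backend pod. An illustrative manifest with the same shape (this is not the actual testdata/invalidsvc.yaml, whose contents are not shown in this log; only the service name is taken from the run):

  apiVersion: v1
  kind: Service
  metadata:
    name: invalid-svc
  spec:
    type: NodePort
    selector:
      app: does-not-exist
    ports:
      - port: 80
        targetPort: 80

Applying a manifest like this with kubectl --context functional-551825 apply -f <manifest> and then asking minikube for the service URL reproduces the "no running pod for service" failure the test expects.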

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 config get cpus: exit status 14 (66.248073ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 config get cpus: exit status 14 (52.35034ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (13.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-551825 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-551825 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 95891: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.59s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-551825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (144.116448ms)

                                                
                                                
-- stdout --
	* [functional-551825] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:05:12.053855   95772 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:12.054172   95772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:12.054187   95772 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:12.054195   95772 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:12.054478   95772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:12.055204   95772 out.go:352] Setting JSON to false
	I1210 00:05:12.056499   95772 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6463,"bootTime":1733782649,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:12.056584   95772 start.go:139] virtualization: kvm guest
	I1210 00:05:12.058409   95772 out.go:177] * [functional-551825] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:12.060055   95772 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:12.060077   95772 notify.go:220] Checking for updates...
	I1210 00:05:12.063594   95772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:12.064719   95772 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:12.065906   95772 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:12.066905   95772 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:12.067884   95772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:12.069218   95772 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:05:12.069613   95772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:12.069662   95772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:12.086423   95772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I1210 00:05:12.086881   95772 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:12.087325   95772 main.go:141] libmachine: Using API Version  1
	I1210 00:05:12.087365   95772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:12.087806   95772 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:12.087987   95772 main.go:141] libmachine: (functional-551825) Calling .DriverName
	I1210 00:05:12.088261   95772 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:12.088710   95772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:12.088772   95772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:12.105084   95772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
	I1210 00:05:12.105602   95772 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:12.106235   95772 main.go:141] libmachine: Using API Version  1
	I1210 00:05:12.106289   95772 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:12.106716   95772 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:12.106958   95772 main.go:141] libmachine: (functional-551825) Calling .DriverName
	I1210 00:05:12.140146   95772 out.go:177] * Using the kvm2 driver based on existing profile
	I1210 00:05:12.141425   95772 start.go:297] selected driver: kvm2
	I1210 00:05:12.141442   95772 start.go:901] validating driver "kvm2" against &{Name:functional-551825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-551825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:12.141560   95772 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:12.143649   95772 out.go:201] 
	W1210 00:05:12.144825   95772 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1210 00:05:12.145985   95772 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
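The first dry run fails validation because the requested 250MB is below minikube's stated 1800MB floor; the second dry run omits --memory, validates against the existing profile, and passes. A sketch of a dry run that would clear the memory check (the 2048 value is an assumption; any allocation at or above the floor should do):

  out/minikube-linux-amd64 start -p functional-551825 --dry-run --memory 2048 --alsologtostderr --driver=kvm2 --container-runtime=crio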

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-551825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-551825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.832001ms)

                                                
                                                
-- stdout --
	* [functional-551825] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:05:11.911035   95729 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:05:11.911163   95729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:11.911175   95729 out.go:358] Setting ErrFile to fd 2...
	I1210 00:05:11.911182   95729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:05:11.911578   95729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:05:11.912269   95729 out.go:352] Setting JSON to false
	I1210 00:05:11.913680   95729 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6463,"bootTime":1733782649,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:05:11.913766   95729 start.go:139] virtualization: kvm guest
	I1210 00:05:11.916135   95729 out.go:177] * [functional-551825] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1210 00:05:11.917521   95729 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:05:11.917582   95729 notify.go:220] Checking for updates...
	I1210 00:05:11.919871   95729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:05:11.921072   95729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:05:11.922303   95729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:05:11.923523   95729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:05:11.924932   95729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:05:11.926687   95729 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:05:11.927319   95729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:11.927418   95729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:11.943146   95729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I1210 00:05:11.943537   95729 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:11.944108   95729 main.go:141] libmachine: Using API Version  1
	I1210 00:05:11.944139   95729 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:11.944497   95729 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:11.944660   95729 main.go:141] libmachine: (functional-551825) Calling .DriverName
	I1210 00:05:11.944891   95729 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:05:11.945188   95729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:05:11.945233   95729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:05:11.960171   95729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40845
	I1210 00:05:11.960553   95729 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:05:11.961093   95729 main.go:141] libmachine: Using API Version  1
	I1210 00:05:11.961120   95729 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:05:11.961472   95729 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:05:11.961654   95729 main.go:141] libmachine: (functional-551825) Calling .DriverName
	I1210 00:05:11.994402   95729 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1210 00:05:11.995524   95729 start.go:297] selected driver: kvm2
	I1210 00:05:11.995536   95729 start.go:901] validating driver "kvm2" against &{Name:functional-551825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-551825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.69 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1210 00:05:11.995640   95729 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:05:11.997549   95729 out.go:201] 
	W1210 00:05:11.998680   95729 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1210 00:05:11.999855   95729 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-551825 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-551825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5vhcs" [15253faa-e3fb-4a65-b455-cd75848d31b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5vhcs" [15253faa-e3fb-4a65-b455-cd75848d31b2] Running
2024/12/10 00:05:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004265157s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.69:30542
functional_test.go:1675: http://192.168.39.69:30542: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-5vhcs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.69:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.69:30542
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.44s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (42.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cc0327a0-d10c-42b5-8952-ef55728d72a4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004695586s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-551825 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-551825 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-551825 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-551825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9c19e09f-bfe5-4fd1-82ca-cdbda7660820] Pending
helpers_test.go:344: "sp-pod" [9c19e09f-bfe5-4fd1-82ca-cdbda7660820] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9c19e09f-bfe5-4fd1-82ca-cdbda7660820] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003970081s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-551825 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-551825 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-551825 delete -f testdata/storage-provisioner/pod.yaml: (5.505486274s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-551825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7011a3cd-ee92-4add-92e3-8633d1516c62] Pending
helpers_test.go:344: "sp-pod" [7011a3cd-ee92-4add-92e3-8633d1516c62] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7011a3cd-ee92-4add-92e3-8633d1516c62] Running
E1210 00:05:47.491744   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003972898s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-551825 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.27s)
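The sequence above exercises dynamic provisioning end to end: claim a volume, mount it in a pod, write a file, recreate the pod, and confirm the file survived. A minimal sketch of the same round trip with hand-written manifests follows; the claim/pod names mirror the log, but the manifest contents, storage size, and image are assumptions, not minikube's actual testdata files.

# Hedged sketch (assumed manifests): claim, mount, write, recreate, re-check.
kubectl --context functional-551825 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata: { name: myclaim }
spec:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 500Mi } }
EOF
cat > /tmp/sp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels: { test: storage-provisioner }
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - { mountPath: /tmp/mount, name: mypd }
  volumes:
  - name: mypd
    persistentVolumeClaim: { claimName: myclaim }
EOF
kubectl --context functional-551825 apply -f /tmp/sp-pod.yaml
kubectl --context functional-551825 wait --for=condition=Ready pod/sp-pod --timeout=3m
kubectl --context functional-551825 exec sp-pod -- touch /tmp/mount/foo   # write through the claim
kubectl --context functional-551825 delete -f /tmp/sp-pod.yaml            # tear the pod down...
kubectl --context functional-551825 apply -f /tmp/sp-pod.yaml             # ...bring it back...
kubectl --context functional-551825 wait --for=condition=Ready pod/sp-pod --timeout=3m
kubectl --context functional-551825 exec sp-pod -- ls /tmp/mount          # ...and foo should still be there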

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh -n functional-551825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cp functional-551825:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd648206357/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh -n functional-551825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh -n functional-551825 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)
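The CpCmd checks copy a local file into the guest, back out again, and into a guest path that does not exist yet. A rough manual equivalent, with the commands taken almost verbatim from the log (only the host-side destination path is illustrative):

# Hedged sketch of the same copy round trip.
out/minikube-linux-amd64 -p functional-551825 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-551825 ssh -n functional-551825 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-amd64 -p functional-551825 cp functional-551825:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt && echo "copy round trip is byte-identical"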

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-551825 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-szvrf" [8835fde8-6c04-4aa1-b028-9e9020d82f17] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-szvrf" [8835fde8-6c04-4aa1-b028-9e9020d82f17] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003537727s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-551825 exec mysql-6cdb49bbb-szvrf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-551825 exec mysql-6cdb49bbb-szvrf -- mysql -ppassword -e "show databases;": exit status 1 (118.565974ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 00:05:50.348540   86296 retry.go:31] will retry after 1.231688042s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-551825 exec mysql-6cdb49bbb-szvrf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.66s)
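The first exec above fails with ERROR 2002 because the pod reports Running before mysqld has finished initializing its socket, so the test retries. A rough equivalent with an explicit retry loop (the pod name lookup is an assumption about how you would find it by hand):

# Hedged sketch: keep retrying the query until mysqld answers on its socket.
POD=$(kubectl --context functional-551825 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
until kubectl --context functional-551825 exec "$POD" -- mysql -ppassword -e 'show databases;'; do
  echo "mysqld not ready yet, retrying..." >&2
  sleep 2
done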

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/86296/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/test/nested/copy/86296/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
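FileSync relies on minikube mirroring files placed under the MINIKUBE_HOME files/ tree into the guest at the same path. A hedged sketch of reproducing that by hand, assuming the default ~/.minikube home and that a (re)start pushes the synced tree; the paths and file content match the log above:

# Hedged sketch: drop a file under the files/ tree, restart, and read it back from the VM.
mkdir -p ~/.minikube/files/etc/test/nested/copy/86296
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/86296/hosts
out/minikube-linux-amd64 start -p functional-551825
out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/test/nested/copy/86296/hosts"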

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/86296.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/ssl/certs/86296.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/86296.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /usr/share/ca-certificates/86296.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/862962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/ssl/certs/862962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/862962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /usr/share/ca-certificates/862962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
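The numeric .0 names read above (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash aliases of the synced certificates. A hedged way to confirm that by hand, assuming openssl is available inside the guest image and that the alias is a hash-named copy of the .pem:

# Hedged sketch: recompute the subject hash of a synced cert and compare it with its alias.
out/minikube-linux-amd64 -p functional-551825 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/86296.pem"
out/minikube-linux-amd64 -p functional-551825 ssh "sudo diff /etc/ssl/certs/86296.pem /etc/ssl/certs/51391683.0 && echo alias matches"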

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-551825 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "sudo systemctl is-active docker": exit status 1 (197.033755ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "sudo systemctl is-active containerd": exit status 1 (193.180107ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
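With crio selected as the runtime, docker and containerd should report inactive; systemctl is-active exits non-zero for anything other than "active", which is why the exit status 1/3 results above are the expected outcome rather than a failure. A hedged one-liner doing the same check across all three runtimes:

# Hedged sketch: only crio should report "active" in the guest.
for rt in docker containerd crio; do
  state=$(out/minikube-linux-amd64 -p functional-551825 ssh "sudo systemctl is-active $rt" || true)
  echo "$rt: $state"
done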

                                                
                                    
x
+
TestFunctional/parallel/License (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-551825 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-551825 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-j959r" [6413aa03-bb20-476d-a627-69bad49cb2ee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-j959r" [6413aa03-bb20-476d-a627-69bad49cb2ee] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003516212s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.17s)
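The later ServiceCmd subtests (List, JSONOutput, HTTPS, Format, URL) all query this hello-node NodePort service. The deployment is just the two kubectl calls shown above; the readiness wait below is an assumed stand-in for the test's pod poll:

# Hedged sketch: recreate the hello-node service the ServiceCmd subtests target.
kubectl --context functional-551825 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-551825 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-551825 wait --for=condition=Available deployment/hello-node --timeout=10m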

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "330.41332ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.647204ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "346.116689ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "67.290265ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdany-port294922660/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733789110666115884" to /tmp/TestFunctionalparallelMountCmdany-port294922660/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733789110666115884" to /tmp/TestFunctionalparallelMountCmdany-port294922660/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733789110666115884" to /tmp/TestFunctionalparallelMountCmdany-port294922660/001/test-1733789110666115884
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.342015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 00:05:10.939800   86296 retry.go:31] will retry after 369.298285ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 10 00:05 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 10 00:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 10 00:05 test-1733789110666115884
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh cat /mount-9p/test-1733789110666115884
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-551825 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [46bde821-5e81-4804-bcd2-fc6308ae23fc] Pending
helpers_test.go:344: "busybox-mount" [46bde821-5e81-4804-bcd2-fc6308ae23fc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [46bde821-5e81-4804-bcd2-fc6308ae23fc] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [46bde821-5e81-4804-bcd2-fc6308ae23fc] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004310505s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-551825 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdany-port294922660/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)
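minikube mount serves the host directory into the guest over 9p, which is why the test greps findmnt output for "9p"; the first findmnt attempt races the mount daemon and is retried. A rough manual version of the same check, with an illustrative host directory and an assumed sleep in place of the retry:

# Hedged sketch: mount a host dir into the guest and confirm it landed as a 9p mount.
mkdir -p /tmp/hostdir && echo hello > /tmp/hostdir/created-by-hand
out/minikube-linux-amd64 mount -p functional-551825 /tmp/hostdir:/mount-9p &
MOUNT_PID=$!
sleep 2                                              # give the 9p server a moment to come up
out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-551825 ssh "cat /mount-9p/created-by-hand"
kill $MOUNT_PID                                      # stopping the daemon tears down the guest mount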

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdspecific-port3199181082/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.925455ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 00:05:19.472997   86296 retry.go:31] will retry after 477.554633ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdspecific-port3199181082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "sudo umount -f /mount-9p": exit status 1 (245.846771ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-551825 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdspecific-port3199181082/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service list -o json
functional_test.go:1494: Took "561.57492ms" to run "out/minikube-linux-amd64 -p functional-551825 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T" /mount1: exit status 1 (356.772463ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1210 00:05:21.408548   86296 retry.go:31] will retry after 360.052972ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-551825 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-551825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1312278116/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.69:30611
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.69:30611
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
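The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint (192.168.39.69:30611) in different output shapes. Once the URL is known, the echoserver behind it can be poked directly; the curl call below is an illustrative addition, not part of the test:

# Hedged sketch: resolve the NodePort URL and hit the echoserver behind it.
URL=$(out/minikube-linux-amd64 -p functional-551825 service hello-node --url)
curl -s "$URL" | head -n 5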

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-551825 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-551825
localhost/kicbase/echo-server:functional-551825
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-551825 image ls --format short --alsologtostderr:
I1210 00:05:33.763263   97533 out.go:345] Setting OutFile to fd 1 ...
I1210 00:05:33.763384   97533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:33.763397   97533 out.go:358] Setting ErrFile to fd 2...
I1210 00:05:33.763404   97533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:33.763708   97533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
I1210 00:05:33.764443   97533 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:33.764598   97533 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:33.765157   97533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:33.765204   97533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:33.780885   97533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
I1210 00:05:33.781441   97533 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:33.781958   97533 main.go:141] libmachine: Using API Version  1
I1210 00:05:33.781970   97533 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:33.782448   97533 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:33.782637   97533 main.go:141] libmachine: (functional-551825) Calling .GetState
I1210 00:05:33.784420   97533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:33.784449   97533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:33.803311   97533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
I1210 00:05:33.803790   97533 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:33.804242   97533 main.go:141] libmachine: Using API Version  1
I1210 00:05:33.804262   97533 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:33.804545   97533 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:33.804734   97533 main.go:141] libmachine: (functional-551825) Calling .DriverName
I1210 00:05:33.804929   97533 ssh_runner.go:195] Run: systemctl --version
I1210 00:05:33.804953   97533 main.go:141] libmachine: (functional-551825) Calling .GetSSHHostname
I1210 00:05:33.807813   97533 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:33.808239   97533 main.go:141] libmachine: (functional-551825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ee:5e", ip: ""} in network mk-functional-551825: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:12 +0000 UTC Type:0 Mac:52:54:00:dd:ee:5e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-551825 Clientid:01:52:54:00:dd:ee:5e}
I1210 00:05:33.808260   97533 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:33.812474   97533 main.go:141] libmachine: (functional-551825) Calling .GetSSHPort
I1210 00:05:33.812614   97533 main.go:141] libmachine: (functional-551825) Calling .GetSSHKeyPath
I1210 00:05:33.812872   97533 main.go:141] libmachine: (functional-551825) Calling .GetSSHUsername
I1210 00:05:33.813023   97533 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/functional-551825/id_rsa Username:docker}
I1210 00:05:33.914250   97533 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:05:34.197525   97533 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.197545   97533 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.197809   97533 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.197829   97533 main.go:141] libmachine: Making call to close connection to plugin binary
I1210 00:05:34.197919   97533 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.197929   97533 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.198201   97533 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.198217   97533 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.50s)
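On the crio runtime, image ls is backed by the "sudo crictl images --output json" call visible in the stderr trace above. A hedged sketch of pulling the same tag list straight from crictl, assuming jq is available on the host:

# Hedged sketch: the same image list, taken directly from crictl inside the guest.
out/minikube-linux-amd64 -p functional-551825 ssh "sudo crictl images --output json" \
  | jq -r '.images[].repoTags[]?'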

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-551825 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/kicbase/echo-server           | functional-551825  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-551825  | a591cc97d7f08 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-551825 image ls --format table --alsologtostderr:
I1210 00:05:34.874171   97671 out.go:345] Setting OutFile to fd 1 ...
I1210 00:05:34.874290   97671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.874302   97671 out.go:358] Setting ErrFile to fd 2...
I1210 00:05:34.874310   97671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.874521   97671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
I1210 00:05:34.875153   97671 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.875255   97671 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.875683   97671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.875725   97671 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.891108   97671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
I1210 00:05:34.891705   97671 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.892376   97671 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.892404   97671 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.892776   97671 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.892980   97671 main.go:141] libmachine: (functional-551825) Calling .GetState
I1210 00:05:34.895137   97671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.895190   97671 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.911066   97671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
I1210 00:05:34.911408   97671 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.911830   97671 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.911853   97671 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.912174   97671 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.912372   97671 main.go:141] libmachine: (functional-551825) Calling .DriverName
I1210 00:05:34.912585   97671 ssh_runner.go:195] Run: systemctl --version
I1210 00:05:34.912610   97671 main.go:141] libmachine: (functional-551825) Calling .GetSSHHostname
I1210 00:05:34.915760   97671 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.916219   97671 main.go:141] libmachine: (functional-551825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ee:5e", ip: ""} in network mk-functional-551825: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:12 +0000 UTC Type:0 Mac:52:54:00:dd:ee:5e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-551825 Clientid:01:52:54:00:dd:ee:5e}
I1210 00:05:34.916244   97671 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.916424   97671 main.go:141] libmachine: (functional-551825) Calling .GetSSHPort
I1210 00:05:34.916596   97671 main.go:141] libmachine: (functional-551825) Calling .GetSSHKeyPath
I1210 00:05:34.916789   97671 main.go:141] libmachine: (functional-551825) Calling .GetSSHUsername
I1210 00:05:34.916924   97671 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/functional-551825/id_rsa Username:docker}
I1210 00:05:35.064438   97671 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:05:35.365318   97671 main.go:141] libmachine: Making call to close driver server
I1210 00:05:35.365334   97671 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:35.365622   97671 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:35.365647   97671 main.go:141] libmachine: Making call to close connection to plugin binary
I1210 00:05:35.365653   97671 main.go:141] libmachine: (functional-551825) DBG | Closing plugin on server side
I1210 00:05:35.365664   97671 main.go:141] libmachine: Making call to close driver server
I1210 00:05:35.365687   97671 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:35.365963   97671 main.go:141] libmachine: (functional-551825) DBG | Closing plugin on server side
I1210 00:05:35.365966   97671 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:35.366023   97671 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-551825 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a591cc97d7f082a5b520cdae7fb9be0f483d43aa7efa7e3daba6b9529c8b2310","repoDigests":["localhost/minikube-local-cache-test@sha256:2ec2ddaa5e6796653ae5c621e72efe0c744b5ac22eb5e58ba7a9d890dbd7ad9f"],"repoTags":["localhost/minikube-local-cache-test:functional-551825"],"size":"3328"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b63
27e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/
kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b25
1e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["
gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicba
se/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-551825"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pa
use@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-551825 image ls --format json --alsologtostderr:
I1210 00:05:34.376309   97609 out.go:345] Setting OutFile to fd 1 ...
I1210 00:05:34.376443   97609 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.376456   97609 out.go:358] Setting ErrFile to fd 2...
I1210 00:05:34.376462   97609 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.376733   97609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
I1210 00:05:34.377651   97609 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.377894   97609 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.378669   97609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.378723   97609 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.394526   97609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33725
I1210 00:05:34.395110   97609 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.395872   97609 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.395893   97609 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.396309   97609 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.396523   97609 main.go:141] libmachine: (functional-551825) Calling .GetState
I1210 00:05:34.398691   97609 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.398740   97609 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.413996   97609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
I1210 00:05:34.414388   97609 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.414990   97609 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.415023   97609 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.415349   97609 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.415579   97609 main.go:141] libmachine: (functional-551825) Calling .DriverName
I1210 00:05:34.415799   97609 ssh_runner.go:195] Run: systemctl --version
I1210 00:05:34.415824   97609 main.go:141] libmachine: (functional-551825) Calling .GetSSHHostname
I1210 00:05:34.418591   97609 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.418945   97609 main.go:141] libmachine: (functional-551825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ee:5e", ip: ""} in network mk-functional-551825: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:12 +0000 UTC Type:0 Mac:52:54:00:dd:ee:5e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-551825 Clientid:01:52:54:00:dd:ee:5e}
I1210 00:05:34.418974   97609 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.419071   97609 main.go:141] libmachine: (functional-551825) Calling .GetSSHPort
I1210 00:05:34.419232   97609 main.go:141] libmachine: (functional-551825) Calling .GetSSHKeyPath
I1210 00:05:34.419379   97609 main.go:141] libmachine: (functional-551825) Calling .GetSSHUsername
I1210 00:05:34.419543   97609 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/functional-551825/id_rsa Username:docker}
I1210 00:05:34.545316   97609 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:05:34.815463   97609 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.815489   97609 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.815818   97609 main.go:141] libmachine: (functional-551825) DBG | Closing plugin on server side
I1210 00:05:34.815888   97609 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.815907   97609 main.go:141] libmachine: Making call to close connection to plugin binary
I1210 00:05:34.815922   97609 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.815938   97609 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.816248   97609 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.816265   97609 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.51s)
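The JSON format is the convenient one for asserting on specific images. A hedged sketch, assuming jq on the host, that checks the profile-local echo-server image shows up in the listing above:

# Hedged sketch: assert a profile-local image exists in the JSON image listing.
out/minikube-linux-amd64 -p functional-551825 image ls --format json \
  | jq -e '[.[].repoTags[]?] | any(. == "localhost/kicbase/echo-server:functional-551825")'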

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-551825 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: a591cc97d7f082a5b520cdae7fb9be0f483d43aa7efa7e3daba6b9529c8b2310
repoDigests:
- localhost/minikube-local-cache-test@sha256:2ec2ddaa5e6796653ae5c621e72efe0c744b5ac22eb5e58ba7a9d890dbd7ad9f
repoTags:
- localhost/minikube-local-cache-test:functional-551825
size: "3328"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-551825
size: "4943877"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-551825 image ls --format yaml --alsologtostderr:
I1210 00:05:33.762543   97534 out.go:345] Setting OutFile to fd 1 ...
I1210 00:05:33.762685   97534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:33.762697   97534 out.go:358] Setting ErrFile to fd 2...
I1210 00:05:33.762704   97534 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:33.762959   97534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
I1210 00:05:33.763782   97534 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:33.763946   97534 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:33.764479   97534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:33.764544   97534 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:33.779376   97534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
I1210 00:05:33.779973   97534 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:33.780673   97534 main.go:141] libmachine: Using API Version  1
I1210 00:05:33.780702   97534 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:33.781110   97534 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:33.781329   97534 main.go:141] libmachine: (functional-551825) Calling .GetState
I1210 00:05:33.783288   97534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:33.783338   97534 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:33.803951   97534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
I1210 00:05:33.804860   97534 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:33.805359   97534 main.go:141] libmachine: Using API Version  1
I1210 00:05:33.805370   97534 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:33.805701   97534 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:33.805869   97534 main.go:141] libmachine: (functional-551825) Calling .DriverName
I1210 00:05:33.806062   97534 ssh_runner.go:195] Run: systemctl --version
I1210 00:05:33.806111   97534 main.go:141] libmachine: (functional-551825) Calling .GetSSHHostname
I1210 00:05:33.812509   97534 main.go:141] libmachine: (functional-551825) Calling .GetSSHPort
I1210 00:05:33.812587   97534 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:33.812615   97534 main.go:141] libmachine: (functional-551825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ee:5e", ip: ""} in network mk-functional-551825: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:12 +0000 UTC Type:0 Mac:52:54:00:dd:ee:5e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-551825 Clientid:01:52:54:00:dd:ee:5e}
I1210 00:05:33.812638   97534 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:33.812663   97534 main.go:141] libmachine: (functional-551825) Calling .GetSSHKeyPath
I1210 00:05:33.812798   97534 main.go:141] libmachine: (functional-551825) Calling .GetSSHUsername
I1210 00:05:33.813020   97534 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/functional-551825/id_rsa Username:docker}
I1210 00:05:33.945564   97534 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:05:34.292717   97534 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.292734   97534 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.292987   97534 main.go:141] libmachine: (functional-551825) DBG | Closing plugin on server side
I1210 00:05:34.292991   97534 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.293025   97534 main.go:141] libmachine: Making call to close connection to plugin binary
I1210 00:05:34.293034   97534 main.go:141] libmachine: Making call to close driver server
I1210 00:05:34.293048   97534 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:34.293355   97534 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:34.293374   97534 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.61s)
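A minimal sketch (not part of the test suite) of consuming the `image ls --format yaml` output shown above: the struct fields mirror the keys visible in the log (id, repoDigests, repoTags, size), and the input file name is a hypothetical stand-in for the captured stdout.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// image mirrors one entry of the YAML list printed by `minikube image ls --format yaml`.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// hypothetical file holding the captured stdout of the command above
	data, err := os.ReadFile("image-ls.yaml")
	if err != nil {
		panic(err)
	}
	var images []image
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s tags=%v size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}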

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-551825 ssh pgrep buildkitd: exit status 1 (257.270991ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image build -t localhost/my-image:functional-551825 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 image build -t localhost/my-image:functional-551825 testdata/build --alsologtostderr: (9.723345322s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-551825 image build -t localhost/my-image:functional-551825 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ecb6bfc86b5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-551825
--> b429d346f8e
Successfully tagged localhost/my-image:functional-551825
b429d346f8e96481f2b6b4a4ef75c33c4082e460b2adb9b2e8672bdf40def139
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-551825 image build -t localhost/my-image:functional-551825 testdata/build --alsologtostderr:
I1210 00:05:34.514608   97632 out.go:345] Setting OutFile to fd 1 ...
I1210 00:05:34.514736   97632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.514746   97632 out.go:358] Setting ErrFile to fd 2...
I1210 00:05:34.514754   97632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:05:34.514954   97632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
I1210 00:05:34.515652   97632 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.516324   97632 config.go:182] Loaded profile config "functional-551825": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1210 00:05:34.516811   97632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.516884   97632 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.533316   97632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46443
I1210 00:05:34.533792   97632 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.534512   97632 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.534546   97632 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.534959   97632 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.535196   97632 main.go:141] libmachine: (functional-551825) Calling .GetState
I1210 00:05:34.537417   97632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1210 00:05:34.537467   97632 main.go:141] libmachine: Launching plugin server for driver kvm2
I1210 00:05:34.553512   97632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
I1210 00:05:34.554012   97632 main.go:141] libmachine: () Calling .GetVersion
I1210 00:05:34.554645   97632 main.go:141] libmachine: Using API Version  1
I1210 00:05:34.554681   97632 main.go:141] libmachine: () Calling .SetConfigRaw
I1210 00:05:34.555047   97632 main.go:141] libmachine: () Calling .GetMachineName
I1210 00:05:34.555255   97632 main.go:141] libmachine: (functional-551825) Calling .DriverName
I1210 00:05:34.555448   97632 ssh_runner.go:195] Run: systemctl --version
I1210 00:05:34.555482   97632 main.go:141] libmachine: (functional-551825) Calling .GetSSHHostname
I1210 00:05:34.558305   97632 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.558777   97632 main.go:141] libmachine: (functional-551825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ee:5e", ip: ""} in network mk-functional-551825: {Iface:virbr1 ExpiryTime:2024-12-10 00:56:12 +0000 UTC Type:0 Mac:52:54:00:dd:ee:5e Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:functional-551825 Clientid:01:52:54:00:dd:ee:5e}
I1210 00:05:34.558814   97632 main.go:141] libmachine: (functional-551825) DBG | domain functional-551825 has defined IP address 192.168.39.69 and MAC address 52:54:00:dd:ee:5e in network mk-functional-551825
I1210 00:05:34.558964   97632 main.go:141] libmachine: (functional-551825) Calling .GetSSHPort
I1210 00:05:34.559107   97632 main.go:141] libmachine: (functional-551825) Calling .GetSSHKeyPath
I1210 00:05:34.559274   97632 main.go:141] libmachine: (functional-551825) Calling .GetSSHUsername
I1210 00:05:34.559408   97632 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/functional-551825/id_rsa Username:docker}
I1210 00:05:34.677964   97632 build_images.go:161] Building image from path: /tmp/build.302493674.tar
I1210 00:05:34.678049   97632 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1210 00:05:34.694786   97632 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.302493674.tar
I1210 00:05:34.710137   97632 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.302493674.tar: stat -c "%s %y" /var/lib/minikube/build/build.302493674.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.302493674.tar': No such file or directory
I1210 00:05:34.710179   97632 ssh_runner.go:362] scp /tmp/build.302493674.tar --> /var/lib/minikube/build/build.302493674.tar (3072 bytes)
I1210 00:05:34.775057   97632 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.302493674
I1210 00:05:34.796130   97632 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.302493674 -xf /var/lib/minikube/build/build.302493674.tar
I1210 00:05:34.817667   97632 crio.go:315] Building image: /var/lib/minikube/build/build.302493674
I1210 00:05:34.817751   97632 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-551825 /var/lib/minikube/build/build.302493674 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1210 00:05:44.159565   97632 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-551825 /var/lib/minikube/build/build.302493674 --cgroup-manager=cgroupfs: (9.341786009s)
I1210 00:05:44.159640   97632 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.302493674
I1210 00:05:44.169420   97632 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.302493674.tar
I1210 00:05:44.178340   97632 build_images.go:217] Built localhost/my-image:functional-551825 from /tmp/build.302493674.tar
I1210 00:05:44.178390   97632 build_images.go:133] succeeded building to: functional-551825
I1210 00:05:44.178396   97632 build_images.go:134] failed building to: 
I1210 00:05:44.178430   97632 main.go:141] libmachine: Making call to close driver server
I1210 00:05:44.178465   97632 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:44.178876   97632 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:44.178896   97632 main.go:141] libmachine: Making call to close connection to plugin binary
I1210 00:05:44.178906   97632 main.go:141] libmachine: Making call to close driver server
I1210 00:05:44.178915   97632 main.go:141] libmachine: (functional-551825) Calling .Close
I1210 00:05:44.179189   97632 main.go:141] libmachine: (functional-551825) DBG | Closing plugin on server side
I1210 00:05:44.179226   97632 main.go:141] libmachine: Successfully made call to close driver server
I1210 00:05:44.179234   97632 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.21s)
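A minimal sketch of driving the same build flow from Go with os/exec, outside the real test helpers. The binary path, profile name, tag, and build context are taken from the command logged above; the two-minute timeout is an assumption, not something the test enforces here.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// assumed timeout; the CI helper manages its own deadlines
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"-p", "functional-551825",
		"image", "build",
		"-t", "localhost/my-image:functional-551825",
		"testdata/build", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("build failed:", err)
	}
}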

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.626178594s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-551825
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image load --daemon kicbase/echo-server:functional-551825 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 image load --daemon kicbase/echo-server:functional-551825 --alsologtostderr: (1.387893206s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image load --daemon kicbase/echo-server:functional-551825 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-551825
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image load --daemon kicbase/echo-server:functional-551825 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image save kicbase/echo-server:functional-551825 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-551825 image save kicbase/echo-server:functional-551825 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.024735193s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.02s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image rm kicbase/echo-server:functional-551825 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)
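A minimal sketch of the save/load round trip exercised by the two tests above: save the tagged image to a tar, remove it from the runtime, load it back, and list images again. The profile and image names mirror the log; the tarball path is hypothetical (the CI run writes into its workspace directory).

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used by this report and returns combined output.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("command %v failed: %v\n", args, err)
	}
	return string(out)
}

func main() {
	const profile = "functional-551825"
	const tarball = "/tmp/echo-server-save.tar" // hypothetical path
	run("-p", profile, "image", "save", "kicbase/echo-server:"+profile, tarball)
	run("-p", profile, "image", "rm", "kicbase/echo-server:"+profile)
	run("-p", profile, "image", "load", tarball)
	fmt.Println(run("-p", profile, "image", "ls"))
}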

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-551825
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-551825 image save --daemon kicbase/echo-server:functional-551825 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-551825
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-551825
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-551825
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-551825
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (199.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-070032 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-070032 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.899701504s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.55s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-070032 -- rollout status deployment/busybox: (3.577223451s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-7gbz8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-d682h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-pw24w -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-7gbz8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-d682h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-pw24w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-7gbz8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-d682h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-pw24w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.83s)
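A minimal sketch of the DNS checks performed above: for each busybox pod, run nslookup against the three names via the minikube-wrapped kubectl. The pod names are copied from this run; the real test discovers them with the jsonpath query shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-7gbz8", "busybox-7dff88458-d682h", "busybox-7dff88458-pw24w"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-070032",
				"--", "exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s\n", pod, name, err, out)
			}
		}
	}
}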

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-7gbz8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-7gbz8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-d682h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-d682h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-pw24w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-070032 -- exec busybox-7dff88458-pw24w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-070032 -v=7 --alsologtostderr
E1210 00:10:09.289598   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.295965   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.307313   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.328699   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.370115   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.451550   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.613619   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:09.935812   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:10.577280   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:11.859089   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:10:14.420790   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-070032 -v=7 --alsologtostderr: (55.680740547s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-070032 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp testdata/cp-test.txt ha-070032:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032:/home/docker/cp-test.txt ha-070032-m02:/home/docker/cp-test_ha-070032_ha-070032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test_ha-070032_ha-070032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032:/home/docker/cp-test.txt ha-070032-m03:/home/docker/cp-test_ha-070032_ha-070032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test_ha-070032_ha-070032-m03.txt"
E1210 00:10:19.543087   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032:/home/docker/cp-test.txt ha-070032-m04:/home/docker/cp-test_ha-070032_ha-070032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test_ha-070032_ha-070032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp testdata/cp-test.txt ha-070032-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m02:/home/docker/cp-test.txt ha-070032:/home/docker/cp-test_ha-070032-m02_ha-070032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test_ha-070032-m02_ha-070032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m02:/home/docker/cp-test.txt ha-070032-m03:/home/docker/cp-test_ha-070032-m02_ha-070032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test_ha-070032-m02_ha-070032-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m02:/home/docker/cp-test.txt ha-070032-m04:/home/docker/cp-test_ha-070032-m02_ha-070032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test_ha-070032-m02_ha-070032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp testdata/cp-test.txt ha-070032-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt ha-070032:/home/docker/cp-test_ha-070032-m03_ha-070032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test_ha-070032-m03_ha-070032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt ha-070032-m02:/home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test_ha-070032-m03_ha-070032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m03:/home/docker/cp-test.txt ha-070032-m04:/home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test_ha-070032-m03_ha-070032-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp testdata/cp-test.txt ha-070032-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3617736836/001/cp-test_ha-070032-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt ha-070032:/home/docker/cp-test_ha-070032-m04_ha-070032.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032 "sudo cat /home/docker/cp-test_ha-070032-m04_ha-070032.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt ha-070032-m02:/home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m02 "sudo cat /home/docker/cp-test_ha-070032-m04_ha-070032-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 cp ha-070032-m04:/home/docker/cp-test.txt ha-070032-m03:/home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 ssh -n ha-070032-m03 "sudo cat /home/docker/cp-test_ha-070032-m04_ha-070032-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.42s)
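A minimal sketch of the copy-and-verify pattern above: cp a local file onto each node, then ssh into that node and compare the bytes against the original. Profile, node names, and remote path mirror the log; everything else is a simplification of the helper functions.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "ha-070032"
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	for _, node := range []string{"ha-070032", "ha-070032-m02", "ha-070032-m03", "ha-070032-m04"} {
		cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, node+":/home/docker/cp-test.txt")
		if out, err := cp.CombinedOutput(); err != nil {
			fmt.Printf("cp to %s failed: %v\n%s\n", node, err, out)
			continue
		}
		cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
		got, err := cat.Output()
		if err != nil {
			fmt.Printf("ssh cat on %s failed: %v\n", node, err)
			continue
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Printf("content mismatch on %s\n", node)
		}
	}
}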

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-070032 node delete m03 -v=7 --alsologtostderr: (15.889348332s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (314.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-070032 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 00:25:09.289957   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:25:47.490837   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:26:32.354916   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-070032 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m13.657658008s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (314.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (72.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-070032 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-070032 --control-plane -v=7 --alsologtostderr: (1m11.380194482s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-070032 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

                                                
                                    
x
+
TestJSONOutput/start/Command (53.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-298571 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1210 00:28:50.562131   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-298571 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.365854148s)
--- PASS: TestJSONOutput/start/Command (53.37s)
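A minimal sketch of consuming the --output=json event stream that these tests validate: read stdout line by line and decode each line as a JSON object. No particular event schema is assumed here; events are kept as generic maps, and the start flags are copied from the command above.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-298571",
		"--output=json", "--user=testUser", "--memory=2200", "--wait=true",
		"--driver=kvm2", "--container-runtime=crio")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("non-JSON line:", sc.Text())
			continue
		}
		fmt.Println(ev)
	}
	_ = cmd.Wait()
}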

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-298571 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-298571 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-298571 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-298571 --output=json --user=testUser: (7.358622997s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-852651 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-852651 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.525527ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c3491ce6-a566-4cf2-86e1-3a99384dd0c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-852651] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"97a61fc8-d54f-484f-9e42-75499b483701","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"dcb57d43-72c8-4614-aa11-6d6809037db3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"befa00c0-cf3a-4c11-a4e4-a1c3bd73adfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig"}}
	{"specversion":"1.0","id":"59237e49-0862-4f47-a355-5b0be067ac85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube"}}
	{"specversion":"1.0","id":"e8590cae-b1c6-4ca1-8a20-089de4403ca4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4c4175eb-c6d6-4cdc-ab76-12acceedfa4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8c43b35c-b58d-4e7e-8d37-b3302ce6b737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-852651" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-852651
--- PASS: TestErrorJSONOutput (0.20s)
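Note: the --output=json lines captured above are CloudEvents-style envelopes (specversion, id, source, type, data). A minimal Go sketch of how such lines could be decoded from stdin follows; the event struct simply mirrors the keys visible in the log above and is illustrative, not minikube's own type.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the JSON keys shown in the stdout above (illustrative only).
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// e.g. out/minikube-linux-amd64 start --output=json ... | this program
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip any non-JSON line
			}
			fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		}
	}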

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (84.18s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-535848 --driver=kvm2  --container-runtime=crio
E1210 00:30:09.289581   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-535848 --driver=kvm2  --container-runtime=crio: (38.429157891s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-548096 --driver=kvm2  --container-runtime=crio
E1210 00:30:47.490778   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-548096 --driver=kvm2  --container-runtime=crio: (42.723657606s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-535848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-548096
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-548096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-548096
helpers_test.go:175: Cleaning up "first-535848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-535848
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-535848: (1.003488014s)
--- PASS: TestMinikubeProfile (84.18s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-793848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-793848 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.025895957s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.03s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-793848 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-793848 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-806345 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-806345 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.821158452s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.82s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-793848 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-806345
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-806345: (1.269733199s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.75s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-806345
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-806345: (21.745180145s)
--- PASS: TestMountStart/serial/RestartStopped (22.75s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-806345 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029725 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029725 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.157841695s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.56s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-029725 -- rollout status deployment/busybox: (3.160811762s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-qwt4p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-rm5jj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-qwt4p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-rm5jj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-qwt4p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-rm5jj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.57s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-qwt4p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-qwt4p -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-rm5jj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-029725 -- exec busybox-7dff88458-rm5jj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                    
TestMultiNode/serial/AddNode (51.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-029725 -v 3 --alsologtostderr
E1210 00:35:09.288944   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-029725 -v 3 --alsologtostderr: (50.585206344s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.13s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-029725 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp testdata/cp-test.txt multinode-029725:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725:/home/docker/cp-test.txt multinode-029725-m02:/home/docker/cp-test_multinode-029725_multinode-029725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test_multinode-029725_multinode-029725-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725:/home/docker/cp-test.txt multinode-029725-m03:/home/docker/cp-test_multinode-029725_multinode-029725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test_multinode-029725_multinode-029725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp testdata/cp-test.txt multinode-029725-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt multinode-029725:/home/docker/cp-test_multinode-029725-m02_multinode-029725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test_multinode-029725-m02_multinode-029725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m02:/home/docker/cp-test.txt multinode-029725-m03:/home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test_multinode-029725-m02_multinode-029725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp testdata/cp-test.txt multinode-029725-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4291806726/001/cp-test_multinode-029725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt multinode-029725:/home/docker/cp-test_multinode-029725-m03_multinode-029725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725 "sudo cat /home/docker/cp-test_multinode-029725-m03_multinode-029725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 cp multinode-029725-m03:/home/docker/cp-test.txt multinode-029725-m02:/home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 ssh -n multinode-029725-m02 "sudo cat /home/docker/cp-test_multinode-029725-m03_multinode-029725-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.02s)
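Note: every cp invocation above uses the profile binary with either a local path or a <node>:<path> reference on each side. A small illustrative Go wrapper around the same command (the binary path and profile name are taken from this run; the helper itself is an assumption, not test code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// cpFile shells out to `minikube -p <profile> cp <src> <dst>`; src and dst
	// may be local paths or <node>:<path> references as in the log above.
	func cpFile(minikubeBin, profile, src, dst string) error {
		out, err := exec.Command(minikubeBin, "-p", profile, "cp", src, dst).CombinedOutput()
		if err != nil {
			return fmt.Errorf("cp %s -> %s: %v\n%s", src, dst, err, out)
		}
		return nil
	}

	func main() {
		err := cpFile("out/minikube-linux-amd64", "multinode-029725",
			"testdata/cp-test.txt", "multinode-029725-m02:/home/docker/cp-test.txt")
		if err != nil {
			fmt.Println(err)
		}
	}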

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 node stop m03: (1.385476761s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029725 status: exit status 7 (406.233017ms)

                                                
                                                
-- stdout --
	multinode-029725
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029725-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029725-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr: exit status 7 (408.257473ms)

                                                
                                                
-- stdout --
	multinode-029725
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-029725-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-029725-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:35:33.517863  115080 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:35:33.517961  115080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:35:33.517968  115080 out.go:358] Setting ErrFile to fd 2...
	I1210 00:35:33.517973  115080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:35:33.518136  115080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:35:33.518293  115080 out.go:352] Setting JSON to false
	I1210 00:35:33.518321  115080 mustload.go:65] Loading cluster: multinode-029725
	I1210 00:35:33.518423  115080 notify.go:220] Checking for updates...
	I1210 00:35:33.518686  115080 config.go:182] Loaded profile config "multinode-029725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1210 00:35:33.518707  115080 status.go:174] checking status of multinode-029725 ...
	I1210 00:35:33.519084  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.519152  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.540331  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I1210 00:35:33.540811  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.541587  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.541618  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.541931  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.542130  115080 main.go:141] libmachine: (multinode-029725) Calling .GetState
	I1210 00:35:33.543717  115080 status.go:371] multinode-029725 host status = "Running" (err=<nil>)
	I1210 00:35:33.543735  115080 host.go:66] Checking if "multinode-029725" exists ...
	I1210 00:35:33.543999  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.544033  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.558680  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33503
	I1210 00:35:33.559076  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.559574  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.559597  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.559910  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.560106  115080 main.go:141] libmachine: (multinode-029725) Calling .GetIP
	I1210 00:35:33.562418  115080 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:35:33.562850  115080 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:35:33.562871  115080 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:35:33.562944  115080 host.go:66] Checking if "multinode-029725" exists ...
	I1210 00:35:33.563212  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.563248  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.577987  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37837
	I1210 00:35:33.578398  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.578871  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.578897  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.579343  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.579714  115080 main.go:141] libmachine: (multinode-029725) Calling .DriverName
	I1210 00:35:33.579975  115080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:35:33.580012  115080 main.go:141] libmachine: (multinode-029725) Calling .GetSSHHostname
	I1210 00:35:33.582236  115080 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:35:33.582595  115080 main.go:141] libmachine: (multinode-029725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:60:b3", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:32:49 +0000 UTC Type:0 Mac:52:54:00:a1:60:b3 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-029725 Clientid:01:52:54:00:a1:60:b3}
	I1210 00:35:33.582631  115080 main.go:141] libmachine: (multinode-029725) DBG | domain multinode-029725 has defined IP address 192.168.39.24 and MAC address 52:54:00:a1:60:b3 in network mk-multinode-029725
	I1210 00:35:33.582728  115080 main.go:141] libmachine: (multinode-029725) Calling .GetSSHPort
	I1210 00:35:33.582882  115080 main.go:141] libmachine: (multinode-029725) Calling .GetSSHKeyPath
	I1210 00:35:33.583013  115080 main.go:141] libmachine: (multinode-029725) Calling .GetSSHUsername
	I1210 00:35:33.583152  115080 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725/id_rsa Username:docker}
	I1210 00:35:33.662271  115080 ssh_runner.go:195] Run: systemctl --version
	I1210 00:35:33.668034  115080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:35:33.681168  115080 kubeconfig.go:125] found "multinode-029725" server: "https://192.168.39.24:8443"
	I1210 00:35:33.681196  115080 api_server.go:166] Checking apiserver status ...
	I1210 00:35:33.681234  115080 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1210 00:35:33.693278  115080 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup
	W1210 00:35:33.701892  115080 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1210 00:35:33.701939  115080 ssh_runner.go:195] Run: ls
	I1210 00:35:33.705795  115080 api_server.go:253] Checking apiserver healthz at https://192.168.39.24:8443/healthz ...
	I1210 00:35:33.710604  115080 api_server.go:279] https://192.168.39.24:8443/healthz returned 200:
	ok
	I1210 00:35:33.710623  115080 status.go:463] multinode-029725 apiserver status = Running (err=<nil>)
	I1210 00:35:33.710632  115080 status.go:176] multinode-029725 status: &{Name:multinode-029725 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:35:33.710653  115080 status.go:174] checking status of multinode-029725-m02 ...
	I1210 00:35:33.710978  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.711036  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.726629  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I1210 00:35:33.727085  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.727669  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.727694  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.728058  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.728245  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetState
	I1210 00:35:33.729725  115080 status.go:371] multinode-029725-m02 host status = "Running" (err=<nil>)
	I1210 00:35:33.729741  115080 host.go:66] Checking if "multinode-029725-m02" exists ...
	I1210 00:35:33.730053  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.730091  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.744624  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1210 00:35:33.745024  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.745463  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.745481  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.745822  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.745992  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetIP
	I1210 00:35:33.748424  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | domain multinode-029725-m02 has defined MAC address 52:54:00:76:ef:b4 in network mk-multinode-029725
	I1210 00:35:33.748825  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ef:b4", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:33:51 +0000 UTC Type:0 Mac:52:54:00:76:ef:b4 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-029725-m02 Clientid:01:52:54:00:76:ef:b4}
	I1210 00:35:33.748854  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | domain multinode-029725-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:76:ef:b4 in network mk-multinode-029725
	I1210 00:35:33.748990  115080 host.go:66] Checking if "multinode-029725-m02" exists ...
	I1210 00:35:33.749328  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.749364  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.763751  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42269
	I1210 00:35:33.764178  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.764635  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.764657  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.764929  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.765100  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .DriverName
	I1210 00:35:33.765271  115080 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1210 00:35:33.765290  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetSSHHostname
	I1210 00:35:33.767665  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | domain multinode-029725-m02 has defined MAC address 52:54:00:76:ef:b4 in network mk-multinode-029725
	I1210 00:35:33.768074  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:ef:b4", ip: ""} in network mk-multinode-029725: {Iface:virbr1 ExpiryTime:2024-12-10 01:33:51 +0000 UTC Type:0 Mac:52:54:00:76:ef:b4 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:multinode-029725-m02 Clientid:01:52:54:00:76:ef:b4}
	I1210 00:35:33.768102  115080 main.go:141] libmachine: (multinode-029725-m02) DBG | domain multinode-029725-m02 has defined IP address 192.168.39.171 and MAC address 52:54:00:76:ef:b4 in network mk-multinode-029725
	I1210 00:35:33.768272  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetSSHPort
	I1210 00:35:33.768430  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetSSHKeyPath
	I1210 00:35:33.768569  115080 main.go:141] libmachine: (multinode-029725-m02) Calling .GetSSHUsername
	I1210 00:35:33.768720  115080 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20062-79135/.minikube/machines/multinode-029725-m02/id_rsa Username:docker}
	I1210 00:35:33.848945  115080 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1210 00:35:33.861441  115080 status.go:176] multinode-029725-m02 status: &{Name:multinode-029725-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1210 00:35:33.861481  115080 status.go:174] checking status of multinode-029725-m03 ...
	I1210 00:35:33.861925  115080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1210 00:35:33.861978  115080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1210 00:35:33.877094  115080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I1210 00:35:33.877501  115080 main.go:141] libmachine: () Calling .GetVersion
	I1210 00:35:33.877994  115080 main.go:141] libmachine: Using API Version  1
	I1210 00:35:33.878016  115080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1210 00:35:33.878348  115080 main.go:141] libmachine: () Calling .GetMachineName
	I1210 00:35:33.878527  115080 main.go:141] libmachine: (multinode-029725-m03) Calling .GetState
	I1210 00:35:33.880107  115080 status.go:371] multinode-029725-m03 host status = "Stopped" (err=<nil>)
	I1210 00:35:33.880123  115080 status.go:384] host is not running, skipping remaining checks
	I1210 00:35:33.880128  115080 status.go:176] multinode-029725-m03 status: &{Name:multinode-029725-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)
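Note: the two non-zero exits above show that `minikube status` signals a stopped node through its exit code (7 here) while still printing the per-node status on stdout. An illustrative Go sketch that runs the same command and treats a non-zero exit as "degraded" rather than a hard failure (binary path and profile name taken from this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-029725", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 7 was observed above while node m03 was stopped;
			// the textual status is still available on stdout.
			fmt.Printf("status exited %d (at least one node not running)\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("all nodes running\n%s", out)
	}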

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 node start m03 -v=7 --alsologtostderr
E1210 00:35:47.491401   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 node start m03 -v=7 --alsologtostderr: (37.818530635s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.41s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-029725 node delete m03: (1.749602498s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029725 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1210 00:45:09.289750   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:45:30.564567   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 00:45:47.491108   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029725 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.650978485s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-029725 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.15s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-029725
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029725-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-029725-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.298242ms)

                                                
                                                
-- stdout --
	* [multinode-029725-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-029725-m02' is duplicated with machine name 'multinode-029725-m02' in profile 'multinode-029725'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-029725-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-029725-m03 --driver=kvm2  --container-runtime=crio: (40.59026819s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-029725
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-029725: exit status 80 (206.995827ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-029725 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-029725-m03 already exists in multinode-029725-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-029725-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.69s)

                                                
                                    
TestScheduledStopUnix (110.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-119260 --memory=2048 --driver=kvm2  --container-runtime=crio
E1210 00:50:47.491545   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-119260 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.546669616s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119260 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-119260 -n scheduled-stop-119260
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119260 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1210 00:51:07.069426   86296 retry.go:31] will retry after 106.459µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.070601   86296 retry.go:31] will retry after 219.539µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.071771   86296 retry.go:31] will retry after 290.311µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.072909   86296 retry.go:31] will retry after 210.351µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.074070   86296 retry.go:31] will retry after 544.62µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.075214   86296 retry.go:31] will retry after 665.038µs: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.076364   86296 retry.go:31] will retry after 1.63151ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.078601   86296 retry.go:31] will retry after 2.40637ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.081824   86296 retry.go:31] will retry after 1.322416ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.084023   86296 retry.go:31] will retry after 2.474062ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.087255   86296 retry.go:31] will retry after 5.554327ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.093482   86296 retry.go:31] will retry after 12.944772ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.106719   86296 retry.go:31] will retry after 12.494588ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.119944   86296 retry.go:31] will retry after 12.725727ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
I1210 00:51:07.133190   86296 retry.go:31] will retry after 22.706328ms: open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/scheduled-stop-119260/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119260 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119260 -n scheduled-stop-119260
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119260
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-119260 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-119260
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-119260: exit status 7 (71.770156ms)

                                                
                                                
-- stdout --
	scheduled-stop-119260
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119260 -n scheduled-stop-119260
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-119260 -n scheduled-stop-119260: exit status 7 (65.305233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-119260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-119260
--- PASS: TestScheduledStopUnix (110.15s)
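Note: the "will retry after ..." lines above come from polling the scheduled-stop pid file with steadily growing delays. A generic sketch of that retry pattern (the function name, bounds and pid-file path below are assumptions, not minikube's retry package):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// retryWithBackoff keeps calling fn, roughly doubling the wait between
	// attempts, until it succeeds or the total waiting time exceeds maxWait.
	func retryWithBackoff(fn func() error, first, maxWait time.Duration) error {
		delay, total := first, time.Duration(0)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if total+delay > maxWait {
				return err
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			total += delay
			delay *= 2
		}
	}

	func main() {
		err := retryWithBackoff(func() error {
			_, err := os.ReadFile("/tmp/example-profile/pid") // placeholder path
			return err
		}, 100*time.Microsecond, time.Second)
		if err != nil {
			fmt.Println("gave up:", err)
		}
	}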

                                                
                                    
TestRunningBinaryUpgrade (203.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1857411767 start -p running-upgrade-993049 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1857411767 start -p running-upgrade-993049 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m0.486541733s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-993049 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-993049 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.139139147s)
helpers_test.go:175: Cleaning up "running-upgrade-993049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-993049
--- PASS: TestRunningBinaryUpgrade (203.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (86.972189ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-971901] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
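This subtest passes precisely because the start command is rejected: combining --no-kubernetes with --kubernetes-version is a usage error (exit status 14, MK_USAGE), and the suggested fix is `minikube config unset kubernetes-version`. A rough sketch of such a negative check, reusing the binary path and profile name from the log above (illustrative only, not the code at no_kubernetes_test.go:83):

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// This invocation mirrors the one above and is expected to fail:
    	// --kubernetes-version cannot be combined with --no-kubernetes.
    	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "NoKubernetes-971901",
    		"--no-kubernetes", "--kubernetes-version=1.20",
    		"--driver=kvm2", "--container-runtime=crio")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		log.Fatal("expected the conflicting flags to be rejected, but start succeeded")
    	}
    	if !strings.Contains(string(out), "MK_USAGE") {
    		log.Fatalf("expected a MK_USAGE error, got: %s", out)
    	}
    	log.Println("conflicting flags rejected, as this subtest expects")
    }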

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971901 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971901 --driver=kvm2  --container-runtime=crio: (1m29.998257149s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-971901 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.25s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (37.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.249060105s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-971901 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-971901 status -o json: exit status 2 (207.623384ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-971901","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-971901
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-971901: (1.033978211s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (37.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971901 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.987085654s)
--- PASS: TestNoKubernetes/serial/Start (28.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-796478 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-796478 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.960985ms)

                                                
                                                
-- stdout --
	* [false-796478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20062
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1210 00:54:39.549484  124510 out.go:345] Setting OutFile to fd 1 ...
	I1210 00:54:39.549742  124510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:54:39.549753  124510 out.go:358] Setting ErrFile to fd 2...
	I1210 00:54:39.549760  124510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1210 00:54:39.549942  124510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-79135/.minikube/bin
	I1210 00:54:39.550535  124510 out.go:352] Setting JSON to false
	I1210 00:54:39.551494  124510 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9430,"bootTime":1733782649,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1210 00:54:39.551552  124510 start.go:139] virtualization: kvm guest
	I1210 00:54:39.553732  124510 out.go:177] * [false-796478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1210 00:54:39.555598  124510 notify.go:220] Checking for updates...
	I1210 00:54:39.555969  124510 out.go:177]   - MINIKUBE_LOCATION=20062
	I1210 00:54:39.557236  124510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1210 00:54:39.558402  124510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20062-79135/kubeconfig
	I1210 00:54:39.559593  124510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-79135/.minikube
	I1210 00:54:39.560817  124510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1210 00:54:39.562090  124510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1210 00:54:39.564053  124510 config.go:182] Loaded profile config "NoKubernetes-971901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1210 00:54:39.564232  124510 config.go:182] Loaded profile config "kubernetes-upgrade-481624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1210 00:54:39.564403  124510 config.go:182] Loaded profile config "running-upgrade-993049": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1210 00:54:39.564553  124510 driver.go:394] Setting default libvirt URI to qemu:///system
	I1210 00:54:39.601337  124510 out.go:177] * Using the kvm2 driver based on user configuration
	I1210 00:54:39.602501  124510 start.go:297] selected driver: kvm2
	I1210 00:54:39.602519  124510 start.go:901] validating driver "kvm2" against <nil>
	I1210 00:54:39.602534  124510 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1210 00:54:39.604707  124510 out.go:201] 
	W1210 00:54:39.605875  124510 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1210 00:54:39.606989  124510 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-796478 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:54:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.245:8443
  name: running-upgrade-993049
contexts:
- context:
    cluster: running-upgrade-993049
    user: running-upgrade-993049
  name: running-upgrade-993049
current-context: running-upgrade-993049
kind: Config
preferences: {}
users:
- name: running-upgrade-993049
  user:
    client-certificate: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/running-upgrade-993049/client.crt
    client-key: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/running-upgrade-993049/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-796478

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-796478"

                                                
                                                
----------------------- debugLogs end: false-796478 [took: 2.851769974s] --------------------------------
helpers_test.go:175: Cleaning up "false-796478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-796478
--- PASS: TestNetworkPlugins/group/false (3.09s)
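The failure above is the expected outcome for this subtest: with --container-runtime=crio, minikube rejects --cni=false because crio requires a CNI, so the false-796478 profile is never created and every debugLogs probe that follows reports a missing context (the kubeconfig it does find belongs to the unrelated running-upgrade-993049 profile). For contrast, the CNI-enabled form exercised later in this report is:

    out/minikube-linux-amd64 start -p kindnet-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio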

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (97.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1954261338 start -p stopped-upgrade-988830 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1954261338 start -p stopped-upgrade-988830 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.581689652s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1954261338 -p stopped-upgrade-988830 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1954261338 -p stopped-upgrade-988830 stop: (1.53942645s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-988830 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-988830 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.020133323s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-971901 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-971901 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.33458ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (28.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1210 00:55:09.289329   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.004939009s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.192315402s)
--- PASS: TestNoKubernetes/serial/ProfileList (28.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-971901
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-971901: (2.822252874s)
--- PASS: TestNoKubernetes/serial/Stop (2.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (20.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-971901 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-971901 --driver=kvm2  --container-runtime=crio: (20.953712276s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.95s)

                                                
                                    
x
+
TestPause/serial/Start (109.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-190222 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-190222 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m49.402106644s)
--- PASS: TestPause/serial/Start (109.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-971901 "sudo systemctl is-active --quiet service kubelet"
E1210 00:55:47.491451   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-971901 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.433883ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-988830
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (133.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-584179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-584179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (2m13.240551158s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (133.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (115.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 00:59:52.360591   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:00:09.289261   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m55.644448595s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (115.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-584179 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115] Pending
helpers_test.go:344: "busybox" [f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f6fc0c1f-b6a7-40ac-9e10-2a68f5b02115] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004670818s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-584179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-901295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-901295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m17.648906196s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-274758 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c35aa3bc-4f78-49dc-91c3-77935e26dc65] Pending
helpers_test.go:344: "busybox" [c35aa3bc-4f78-49dc-91c3-77935e26dc65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1210 01:00:47.491108   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [c35aa3bc-4f78-49dc-91c3-77935e26dc65] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004516142s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-274758 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-584179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-584179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-274758 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-274758 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [feb99993-d8af-450e-9cd9-8702ee7de075] Pending
helpers_test.go:344: "busybox" [feb99993-d8af-450e-9cd9-8702ee7de075] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [feb99993-d8af-450e-9cd9-8702ee7de075] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004488095s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-901295 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1210 01:02:10.566626   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-901295 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (641.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-584179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-584179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m41.389735128s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-584179 -n no-preload-584179
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (641.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (601.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274758 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m1.571343786s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274758 -n embed-certs-274758
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (601.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-094470 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-094470 --alsologtostderr -v=3: (5.289912462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-094470 -n old-k8s-version-094470: exit status 7 (64.686034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-094470 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-901295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 01:05:09.289687   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:05:47.491297   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:10:09.289696   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:10:47.491455   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-901295 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m22.778368756s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-901295 -n default-k8s-diff-port-901295
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (563.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-967831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-967831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (43.466767375s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (107.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m47.181825441s)
--- PASS: TestNetworkPlugins/group/auto/Start (107.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (88.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.903903978s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-967831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-967831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012727278s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-967831 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-967831 --alsologtostderr -v=3: (11.805209961s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.81s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-967831 -n newest-cni-967831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-967831 -n newest-cni-967831: exit status 7 (65.560807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-967831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (53.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-967831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1210 01:30:09.288970   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-967831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (53.038513375s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-967831 -n newest-cni-967831
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-796478 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-967831 image list --format=json
I1210 01:30:20.512105   86296 config.go:182] Loaded profile config "auto-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wg4rf" [393578b5-0239-47f4-a2c0-62941d17bf87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wg4rf" [393578b5-0239-47f4-a2c0-62941d17bf87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004756647s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)
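
Each NetCatPod step replaces the netcat deployment and then waits for a pod carrying the app=netcat label to report Running, as the helpers_test lines show. A minimal sketch of that polling loop done with kubectl and jsonpath; the context name demo-auto and the 2-minute budget are assumptions, not the suite's actual values (it waits up to 15m):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed budget for the sketch
	for {
		// Ask for the phase of every pod matching the label the test selects on.
		out, err := exec.Command("kubectl", "--context", "demo-auto",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && phases[0] == "Running" {
			fmt.Println("netcat pod is Running")
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for app=netcat; last phases:", phases)
			os.Exit(1)
		}
		time.Sleep(5 * time.Second)
	}
}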

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-967831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-967831 -n newest-cni-967831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-967831 -n newest-cni-967831: exit status 2 (262.41138ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-967831 -n newest-cni-967831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-967831 -n newest-cni-967831: exit status 2 (274.75365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-967831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-967831 -n newest-cni-967831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-967831 -n newest-cni-967831
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.93s)
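
In the Pause step a paused profile reports APIServer=Paused and Kubelet=Stopped, each via a non-zero status exit code (exit status 2 above), before unpause restores them. A minimal sketch that reads both fields and only unpauses once the paused state is seen; the profile name demo-cni is hypothetical and the status exit codes are ignored on purpose, as in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// componentState runs `minikube status` for one field and returns its text,
// ignoring the non-zero exit code that a paused or stopped component produces.
func componentState(field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", "demo-cni").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", "demo-cni").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "pause failed:", err)
		os.Exit(1)
	}

	api, kubelet := componentState("APIServer"), componentState("Kubelet")
	fmt.Printf("after pause: APIServer=%s Kubelet=%s\n", api, kubelet)
	if api != "Paused" || kubelet != "Stopped" {
		fmt.Fprintln(os.Stderr, "unexpected state, not unpausing")
		os.Exit(1)
	}

	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", "demo-cni").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "unpause failed:", err)
		os.Exit(1)
	}
	fmt.Println("profile unpaused")
}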

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r2pj7" [5286154f-c4ff-4d6d-a83a-34184b75c271] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.008066579s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (79.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m19.737609691s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-796478 "pgrep -a kubelet"
I1210 01:30:27.679646   86296 config.go:182] Loaded profile config "kindnet-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9htqw" [fdb3fad7-347c-43b2-a64e-daebd93cd1eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9htqw" [fdb3fad7-347c-43b2-a64e-daebd93cd1eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005152509s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
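
The DNS, Localhost, and HairPin checks repeated for every plugin below are the same three kubectl exec probes against the netcat deployment: cluster DNS resolution, the pod's own loopback port, and the pod reaching itself through its service name (hairpin traffic). A minimal table-driven sketch of the trio, assuming a hypothetical demo-auto kube context:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	checks := []struct {
		name string
		cmd  string
	}{
		{"dns", "nslookup kubernetes.default"},          // cluster DNS resolves
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"}, // pod can reach its own loopback
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},      // pod reaches itself via its service
	}
	for _, c := range checks {
		out, err := exec.Command("kubectl", "--context", "demo-auto",
			"exec", "deployment/netcat", "--", "/bin/sh", "-c", c.cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s check failed: %v\n%s", c.name, err, out)
			os.Exit(1)
		}
		fmt.Printf("%s check passed\n", c.name)
	}
}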

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (80.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m20.900166224s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (80.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1210 01:30:47.491186   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/addons-327804/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:30:50.828652   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m26.06057801s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (122.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1210 01:31:01.070443   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:21.552692   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m2.14748231s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mrdv6" [33c7f65b-09ef-4d01-8780-46d37ab47bc6] Running
E1210 01:31:45.846750   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:45.853259   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:45.864765   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:45.886226   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:45.927619   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:46.009862   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:46.171447   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:46.493209   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:47.135264   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:31:48.417617   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00455341s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-796478 "pgrep -a kubelet"
I1210 01:31:50.963433   86296 config.go:182] Loaded profile config "calico-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-796478 replace --force -f testdata/netcat-deployment.yaml
E1210 01:31:50.979877   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context calico-796478 replace --force -f testdata/netcat-deployment.yaml: (1.237184202s)
I1210 01:31:52.207012   86296 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1210 01:31:52.230834   86296 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gqxbz" [26430f26-3e7d-453d-b6f4-99daa8949679] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 01:31:56.101191   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gqxbz" [26430f26-3e7d-453d-b6f4-99daa8949679] Running
E1210 01:32:01.112527   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.118885   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.130239   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.151620   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.193475   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.275010   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.436612   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:01.758914   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:02.400268   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:02.514680   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/no-preload-584179/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:32:03.682432   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004499744s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.30s)
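
The kapi.go lines in the calico run show the harness waiting for the netcat deployment to "stabilize", i.e. for observedGeneration to catch up with generation and for replicas to be reported. A minimal sketch of one such comparison read through kubectl jsonpath; the context name demo-calico is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		// generation vs observedGeneration tells us whether the controller has
		// seen the latest spec; readyReplicas tells us whether pods came up.
		out, err := exec.Command("kubectl", "--context", "demo-calico",
			"get", "deployment", "netcat", "-o",
			"jsonpath={.metadata.generation} {.status.observedGeneration} {.status.readyReplicas}").Output()
		if err == nil {
			f := strings.Fields(string(out))
			if len(f) == 3 && f[0] == f[1] && f[2] != "0" {
				fmt.Println("deployment netcat has stabilized")
				return
			}
			fmt.Println("still waiting, status:", strings.TrimSpace(string(out)))
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "deployment never stabilized")
	os.Exit(1)
}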

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-796478 "pgrep -a kubelet"
I1210 01:32:07.501376   86296 config.go:182] Loaded profile config "custom-flannel-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s49hr" [0a7eafef-55fc-4015-ac1b-8735610d0a66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1210 01:32:11.365783   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/default-k8s-diff-port-901295/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-s49hr" [0a7eafef-55fc-4015-ac1b-8735610d0a66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.008763045s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-796478 "pgrep -a kubelet"
I1210 01:32:13.607779   86296 config.go:182] Loaded profile config "enable-default-cni-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5ppvd" [2677cac4-11ba-4ad9-8a80-9591c7656274] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5ppvd" [2677cac4-11ba-4ad9-8a80-9591c7656274] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.008062181s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (90.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1210 01:32:26.824552   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-796478 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m30.737950679s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (16.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-796478 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-796478 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.172745875s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1210 01:32:42.095831   86296 retry.go:31] will retry after 858.097997ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (16.17s)
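
The enable-default-cni DNS check first times out (";; connection timed out"), then the harness retries after a sub-second delay and succeeds. A minimal sketch of that retry-on-failure wrapping of the same nslookup, using a hypothetical fixed backoff schedule rather than the randomized one retry.go applies, and a hypothetical demo-cni kube context:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	backoffs := []time.Duration{time.Second, 3 * time.Second, 10 * time.Second} // assumed schedule
	var lastErr error
	for attempt := 0; attempt <= len(backoffs); attempt++ {
		out, err := exec.Command("kubectl", "--context", "demo-cni",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Printf("lookup succeeded on attempt %d:\n%s", attempt+1, out)
			return
		}
		lastErr = err
		fmt.Fprintf(os.Stderr, "attempt %d failed (%v), output:\n%s", attempt+1, err, out)
		if attempt < len(backoffs) {
			time.Sleep(backoffs[attempt]) // transient DNS readiness often clears after a short wait
		}
	}
	fmt.Fprintln(os.Stderr, "giving up:", lastErr)
	os.Exit(1)
}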

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c4jbd" [06715cdf-a60d-4600-b706-0f9222b64d2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004667912s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-796478 "pgrep -a kubelet"
I1210 01:33:02.377129   86296 config.go:182] Loaded profile config "flannel-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9dpck" [6c32b32b-d8c7-4be8-a875-7904e4a06add] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9dpck" [6c32b32b-d8c7-4be8-a875-7904e4a06add] Running
E1210 01:33:07.785944   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/old-k8s-version-094470/client.crt: no such file or directory" logger="UnhandledError"
E1210 01:33:12.364868   86296 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/functional-551825/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004189372s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-796478 "pgrep -a kubelet"
I1210 01:33:53.734323   86296 config.go:182] Loaded profile config "bridge-796478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-796478 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p54sk" [51e99df7-e03b-456b-aae0-be64cd0a4690] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p54sk" [51e99df7-e03b-456b-aae0-be64cd0a4690] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00399463s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-796478 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-796478 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
259 TestStartStop/group/disable-driver-mounts 0.15
266 TestNetworkPlugins/group/kubenet 4.63
274 TestNetworkPlugins/group/cilium 3.11
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-327804 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-371895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-371895
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-796478 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-796478

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-796478"

                                                
                                                
----------------------- debugLogs end: kubenet-796478 [took: 4.483251016s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-796478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-796478
--- SKIP: TestNetworkPlugins/group/kubenet (4.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-796478 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-796478" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20062-79135/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 10 Dec 2024 00:54:40 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.245:8443
  name: running-upgrade-993049
contexts:
- context:
    cluster: running-upgrade-993049
    user: running-upgrade-993049
  name: running-upgrade-993049
current-context: running-upgrade-993049
kind: Config
preferences: {}
users:
- name: running-upgrade-993049
  user:
    client-certificate: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/running-upgrade-993049/client.crt
    client-key: /home/jenkins/minikube-integration/20062-79135/.minikube/profiles/running-upgrade-993049/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-796478

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-796478" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-796478"

                                                
                                                
----------------------- debugLogs end: cilium-796478 [took: 2.976001322s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-796478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-796478
--- SKIP: TestNetworkPlugins/group/cilium (3.11s)

                                                
                                    